
It is an interesting time for everyone concerned with open source vulnerabilities. The U.S. Executive Order on Improving the Nation's Cybersecurity introduces requirements for vulnerability disclosure programs and assurances for software used by the U.S. government, and those requirements go into effect later this year. Finding and fixing security vulnerabilities has never been more important, yet with increasing interest in the area, the vulnerability management space has become fragmented: there are a lot of new tools and competing standards.

In 2021, we announced the launch of OSV, a database of open source vulnerabilities built partially from vulnerabilities found through Google’s OSS-Fuzz program. OSV has grown since then and now includes a widely adopted OpenSSF schema and a vulnerability scanner. In this blog post, we’ll cover how these tools help maintainers track vulnerabilities from discovery to remediation, and how to use OSV together with other SBOM and VEX standards.

Vulnerability Databases

The lifecycle of a known vulnerability begins when it is discovered. To reach developers, the vulnerability needs to be added to a database. CVEs are the industry standard for describing vulnerabilities across all software, but until recently there was no open source centric database. As a result, several independent vulnerability databases exist across different ecosystems.

To address this, we announced the OSV Schema to unify open source vulnerability databases. The schema is machine readable, and is designed so dependencies can be easily matched to vulnerabilities using automation. The OSV Schema remains the only widely adopted schema that treats open source as a first class citizen. Since becoming a part of OpenSSF, the OSV Schema has seen adoption from services like GitHub, ecosystems such as Rust and Python, and Linux distributions such as Rocky Linux.
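
The schema is easiest to appreciate with a concrete example. Below is a minimal sketch of an OSV-style record expressed as a Python dictionary; the field names follow the OSV Schema, while the identifier, package name, and version numbers are hypothetical.

# Minimal sketch of an OSV Schema record. Field names follow the OSV Schema;
# the identifier, package, and versions are hypothetical.
example_osv_record = {
    "schema_version": "1.4.0",
    "id": "GHSA-xxxx-xxxx-xxxx",  # hypothetical advisory identifier
    "summary": "Example vulnerability in a hypothetical package",
    "affected": [
        {
            "package": {
                "ecosystem": "PyPI",
                "name": "examplepkg",  # hypothetical package
                "purl": "pkg:pypi/examplepkg",
            },
            # Version ranges let tools match dependencies to vulnerabilities
            # automatically.
            "ranges": [
                {
                    "type": "ECOSYSTEM",
                    "events": [{"introduced": "0"}, {"fixed": "1.2.3"}],
                }
            ],
        }
    ],
    "references": [{"type": "ADVISORY", "url": "https://example.com/advisory"}],
}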

Thanks to such wide community adoption of the OSV Schema, OSV.dev is able to provide a distributed vulnerability database and service that pulls from language specific authoritative sources. In total, the OSV.dev database now includes 43,302 vulnerabilities from 16 ecosystems as of March 2023. Users can check OSV for a comprehensive view of all known vulnerabilities in open source.

Every vulnerability in OSV.dev includes affected package manager versions and git commit hashes, so open source users can easily determine whether their packages are impacted using the versioning schemes they already rely on. The OSV database, tools, and schema are also developed through the kind of community-driven, distributed collaboration that open source maintainers are familiar with.

Matching

The next step in managing vulnerabilities is to determine project dependencies and their associated vulnerabilities. Last December we released OSV-Scanner, a free, open source tool which scans software projects’ lockfiles, SBOMs, or git repositories to identify vulnerabilities found in the OSV.dev database. When a project is scanned, the user gets a list of all known vulnerabilities in the project.
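
OSV-Scanner does this matching for you, but it can be helpful to see what a single lookup involves. The sketch below queries the public OSV.dev API (https://api.osv.dev/v1/query) for one dependency; the package name and version are hypothetical.

# Query OSV.dev for known vulnerabilities affecting one dependency.
# The package and version below are hypothetical examples.
import json
import urllib.request

query = {
    "package": {"name": "examplepkg", "ecosystem": "PyPI"},
    "version": "1.2.2",
}
request = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

# Each entry in "vulns" is an OSV record that affects this version.
for vuln in result.get("vulns", []):
    print(vuln["id"], vuln.get("summary", ""))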

In the two months since launch, OSV-Scanner has seen positive reception from the community, including over 4,600 stars and 130 PRs from 29 contributors. Thank you to the community, which has been incredibly helpful in identifying bugs, supporting new lockfile formats, and helping us prioritize new features for the tool.

Remediation

Once a vulnerability has been identified, it needs to be remediated. Removing a vulnerability by upgrading the affected package is often not as simple as it seems. Sometimes an upgrade will break your project or cause another dependency to stop functioning correctly. These complex dependency graph constraints can be difficult to resolve. We’re currently working on features in OSV-Scanner to improve this process by suggesting minimal upgrade paths.

Sometimes, it isn’t even necessary to upgrade a package. A vulnerable component may be present in a project, but that doesn’t mean it is exploitable. For example, it may not be necessary to update a vulnerable component if it is never called. In cases like this, a VEX (Vulnerability Exploitability eXchange) statement can record that justification and help prioritize vulnerability remediation.
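
To make this concrete, here is a rough sketch (as a Python dictionary) of the information such a statement carries, loosely following the OpenVEX style; the author, CVE identifier, and product are hypothetical, and the exact field layout varies between VEX formats.

# Rough sketch of the information a VEX statement conveys, loosely following
# the OpenVEX style. Author, CVE, and product identifiers are hypothetical.
example_vex = {
    "@context": "https://openvex.dev/ns",
    "author": "Example Project Maintainers",
    "timestamp": "2023-03-01T00:00:00Z",
    "statements": [
        {
            "vulnerability": "CVE-2023-00000",          # hypothetical CVE
            "products": ["pkg:pypi/examplepkg@1.2.2"],  # hypothetical product
            "status": "not_affected",
            # Standard VEX justification: the vulnerable code is present in a
            # dependency but never executed by this product.
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}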

Manually generating VEX statements is time intensive and complex, requiring deep expertise in the project’s codebase and libraries included in its dependency tree. These costs are barriers to VEX adoption at scale, so we’re working on the ability to auto-generate high quality VEX statements based on static analysis and manual ignore files. The format for this will likely be one or more of the current emerging VEX standards.

Compatibility

Not only are there multiple emerging VEX standards (such as OpenVEX, CycloneDX, and CSAF), there are also multiple advisory formats (CVE, CSAF) and SBOM formats (CycloneDX, SPDX). Compatibility is a concern for project maintainers and open source users throughout the process of identifying and fixing project vulnerabilities. A developer may be obligated to use another standard and wonder if OSV can be used alongside it.

Fortunately, the answer is generally yes! OSV provides a focused, first-class experience for describing open source vulnerabilities, while providing an easy bridge to other standards.

CVE 5.0

The OSV team has directly worked with the CVE Quality Working Group on a key new feature of the latest CVE 5.0 standard: a new versioning schema that closely resembles OSV’s own versioning schema. This will enable easy conversion from OSV to CVE 5.0, and vice versa. It also enables OSV to contribute high quality metadata directly back to CVE, and drive better machine readability and data quality across the open source ecosystem.

Other emerging standards

Not all standards will convert as effortlessly as CVE to OSV. Emerging standards like CSAF are comparatively complicated because they support broader use cases. These standards often need to encode affected proprietary software, and CSAF includes rich mechanisms to express complicated nested product trees that are unnecessary for open source. As a result, the spec is roughly six times the size of OSV and difficult to use directly for open source.

OSV Schema's strong adoption shows that the open source community prefers a lightweight standard, tailored for open source. However, the OSV Schema maintains compatibility with CSAF for identification of packages through the Package URL and vers standards. CSAF records that use these mechanisms can be directly converted to OSV, and all OSV entries can be converted to CSAF.

SBOM and VEX standards

Similarly, all emerging SBOM and VEX standards maintain compatibility with OSV through the Package URL specification. OSV-Scanner already provides scanning support for the SPDX and CycloneDX SBOM standards.

OSV in 2023

OSV already provides straightforward compatibility with established standards such as CVE, SPDX, and CycloneDX. While it’s not clear yet which other emerging SBOM and VEX formats will become the standard, OSV has a clear path to supporting all of them. Open source developers and ecosystems will likely find OSV to be convenient for recording and consuming vulnerability information given OSV’s focused, minimal design.

OSV is not just built for open source, it is an open source project. We aim to build tools that fit easily into your workflow and help you identify and fix vulnerabilities in your projects. Your input, through contributions, questions, and feedback, is very valuable to us as we work toward that goal. Questions can be asked by opening an issue, and all of our projects (OSV.dev, OSV-Scanner, OSV-Schema) welcome contributors.


Want to keep up with the latest OSV developments? We’ve just launched a project blog! Check out our first major post, all about how VEX could work at scale.

Starting in Chrome 111 we will begin to turn down the Chrome Cleanup Tool, an application distributed to Chrome users on Windows to help find and remove unwanted software (UwS).

Origin story

The Chrome Cleanup Tool was introduced in 2015 to help users recover from unexpected settings changes, and to detect and remove unwanted software. To date, it has performed more than 80 million cleanups, helping to pave the way for a cleaner, safer web.

A changing landscape

In recent years, several factors have led us to reevaluate the need for this application to keep Chrome users on Windows safe.

First, the user perspective – Chrome user complaints about UwS have continued to fall over the years, averaging out to around 3% of total complaints in the past year. Commensurate with this, we have observed a steady decline in UwS findings on users' machines. For example, last month just 0.06% of Chrome Cleanup Tool scans run by users detected known UwS.

Next, several positive changes in the platform ecosystem have contributed to a more proactive safety stance than a reactive one. For example, Google Safe Browsing as well as antivirus software both block file-based UwS more effectively now, which was originally the goal of the Chrome Cleanup Tool. Where file-based UwS migrated over to extensions, our substantial investments in the Chrome Web Store review process have helped catch malicious extensions that violate the Chrome Web Store's policies.

Finally, we've observed changing trends in the malware space with techniques such as Cookie Theft on the rise – as such, we've doubled down on defenses against such malware via a variety of improvements including hardened authentication workflows and advanced heuristics for blocking phishing and social engineering emails, malware landing pages, and downloads.

What to expect

Starting in Chrome 111, users will no longer be able to request a Chrome Cleanup Tool scan through Safety Check or leverage the "Reset settings and cleanup" option offered in chrome://settings on Windows. Chrome will also remove the component that periodically scans Windows machines and prompts users for cleanup should it find anything suspicious.

Even without the Chrome Cleanup Tool, users are automatically protected by Safe Browsing in Chrome. Users also have the option to turn on Enhanced protection by navigating to chrome://settings/security – this mode substantially increases protection from dangerous websites and downloads by sharing real-time data with Safe Browsing.

While we'll miss the Chrome Cleanup Tool, we wanted to take this opportunity to acknowledge its role in combating UwS for the past 8 years. We'll continue to monitor user feedback and trends in the malware ecosystem, and when adversaries adapt their techniques again – which they will – we'll be at the ready.

As always, please feel free to send us feedback or find us on Twitter @googlechrome.



We’re excited to announce changes that make getting Google Trust Services TLS certificates easier for Google Domains customers. With this integration, all Google Domains customers will be able to acquire public certificates for their websites at no additional cost, whether the site runs on a Google service or uses another provider. Additionally, Google Domains is now making an API available to allow for DNS-01 challenges with Google Domains DNS servers to issue and renew certificates automatically.



As with the existing Google Cloud integration, the Automatic Certificate Management Environment (ACME) protocol is used to enable seamless, automatic lifecycle management of TLS certificates.



These certificates are issued by the same Certificate Authority (CA) Google uses for its own sites, so they are widely supported across the entire spectrum of devices used to access your services.



How do I use it?


Using ACME ensures your certificates are renewed automatically, and many hosting services already support ACME. If you're running your own web servers or services, there are ACME clients that integrate easily with common servers. To use this feature, you will need an API key called an External Account Binding (EAB) key, which associates your certificate requests with your Google Domains account. You can get an API key by visiting Google Domains and navigating to the Security page for your domain. There you’ll see a section for Google Trust Services where you can get your EAB key.



Example of EAB Credentials in Google Domains



As an example, with the popular Certbot ACME client, the configuration to register an account looks like:


certbot register --email <CONTACT_EMAIL> --no-eff-email --server "https://dv.acme-v02.api.pki.goog/directory" --eab-kid "<EAB_KEY_ID>" --eab-hmac-key "<EAB_HMAC_KEY>"




The EAB_KEY_ID and EAB_HMAC_KEY are both provided on your Google Domains security page.



After the account is created, you may issue certificates by running:

certbot certonly -d <domain.com> --server "https://dv.acme-v02.api.pki.goog/directory" --standalone



Then follow the prompts to complete validation and download your certificate. If you need additional information please visit the Google Domains help center.



Google Domains and ACME DNS-01



ACME uses challenges to validate domain control before issuing certificates. The ACME DNS-01 challenge can be an efficient way for users to automate the validation process and integrate with existing websites and web hosting services.
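
ACME clients and plugins handle the challenge for you, but as a sketch of what is involved: the client publishes a DNS TXT record at _acme-challenge.<your-domain> whose value is derived from the challenge token and the ACME account key (RFC 8555, section 8.4). The snippet below illustrates that derivation with a hypothetical token and key thumbprint.

# Sketch of how an ACME client derives the DNS-01 TXT record value
# (RFC 8555, section 8.4). The token and thumbprint are hypothetical;
# a real client computes the thumbprint from its ACME account key.
import base64
import hashlib

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

token = "example-challenge-token"              # issued by the ACME server
account_key_thumbprint = "example-thumbprint"  # derived from the account key

key_authorization = f"{token}.{account_key_thumbprint}"
txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

# The client publishes this value, then asks the CA to validate:
print("_acme-challenge.example.com. TXT", txt_value)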



Google Domains now provides an API for ACME DNS-01 challenges that helps streamline the process for users to authenticate domain control quickly and securely. It is already supported in some popular ACME clients, including Certbot (via this plugin), Caddy, Certify The Web, and Posh-ACME. You can find additional information on the Google Domains site.






Example of DNS API Access Token in Google Domains



To set up automatic certificate provisioning with ACME and DNS-01, follow these steps:



  1. Sign in to Google Domains.
  2. Select the domain that you want to use.
  3. At the top left, click “Menu” and select “Security”.
  4. Under section “ACME DNS API”, click “Create token”.
  5. A dialog box will appear with an “API Token”. This is the API token you will need to enter into your ACME client; copy it by clicking the copy button next to the token.
    • NOTE: This value is only shown once. After the dialog box is closed you will not be able to see this API token again. Store the token in a safe place, since anyone who has it can modify some DNS TXT records for your domain.
    • If you did not save this value before closing the dialog box, you can simply delete the token and create a new one.
    • At most 10 API tokens can exist per domain at a time.
  6. Once the dialog box is closed, the new token will appear in the list. You can delete it at any time to revoke its access.
  7. The API token can now be used in any ACME client that supports the Google Domains ACME DNS API. Each client differs slightly in how the token is specified, so consult the documentation for your chosen ACME client.




Regardless of which ACME client you use, Google Domains and Google Trust Services are excited to offer a reliable option for no-cost TLS certificates. This continues the mission of helping build a safer internet by providing a transparent, trusted, and reliable Certificate Authority.

1. Bring Chrome under Cloud Management

Your journey towards keeping your Google Workspace users and data safe starts with bringing your Chrome browsers under Cloud Management at no additional cost. Chrome Browser Cloud Management is a single destination for applying Chrome browser policies and security controls across Windows, Mac, Linux, iOS, and Android. You also get deep visibility into your browser fleet, including which browsers are out of date and which extensions your users are using, bringing insight into potential security blind spots in your enterprise.

Managing Chrome from the cloud allows Google Workspace admins to enforce enterprise protections and policies for the whole browser on fully managed devices, without requiring users to sign in to Chrome to have policies enforced. You can also enforce policies that apply when your managed users sign in to Chrome browser on any Windows, Mac, or Linux computer (via Chrome Browser user-level management), not just on corporate-managed devices.

This enables you to keep your corporate data and users safe, whether they are accessing work resources from fully managed, personal, or unmanaged devices used by your vendors.

Getting started is easy. If your organization hasn’t already, check out this guide for steps on how to enroll your devices.

2. Enforce built-in protections against Phishing, Ransomware & Malware

Chrome uses Google’s Safe Browsing technology to help protect billions of devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing is enabled by default for all users when they download Chrome. As an administrator, you can prevent your users from disabling Safe Browsing by enforcing the SafeBrowsingProtectionLevel policy.

Over the past few years, we’ve seen threats on the web become increasingly sophisticated. Turning on Enhanced Safe Browsing will substantially increase protection from dangerous websites, malicious downloads, and extensions. For the best protection against web-based attacks Google has to offer, enforce Enhanced Safe Browsing for your users.
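
If you also manage some machines outside of Chrome Browser Cloud Management, the same policy can be enforced locally. The sketch below is illustrative only, assuming a Linux machine and Chrome's managed policy directory; in most deployments you would simply set the policy from the Admin console.

# Illustrative sketch: enforce Enhanced Safe Browsing on a Linux machine by
# writing a managed policy file (requires root). Most admins would set this
# policy from the Admin console or Chrome Browser Cloud Management instead.
import json
import pathlib

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)

policy = {
    # SafeBrowsingProtectionLevel: 0 = no protection, 1 = standard, 2 = enhanced
    "SafeBrowsingProtectionLevel": 2,
}

(policy_dir / "safe_browsing.json").write_text(json.dumps(policy, indent=2))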

3. Enable Enterprise Credential Protections in Chrome

Enterprise password reuse introduces significant security risks. Quite often, employees reuse corporate credentials as personal logins and vice versa. Occasionally, employees even enter their corporate passwords into phishing websites. Reused employee logins give criminals easy paths to access corporate data.

Chrome Enterprise Password Reuse detection helps enterprises avoid identity theft and employee and organizational data breaches by detecting when an employee enters their corporate credentials into any other website.

Google Password Manager in Chrome also has a built-in Password Checkup feature that alerts users when Google discovers a username and password has been exposed in a public data breach.

Password alerts are surfaced in the Audit Logs and the Security Investigation Tool, which helps admins create automated rules or take appropriate steps to mitigate the issue, such as asking users to reset their passwords.

4. Gain insights into critical security events via Audit Logs, Google Security Center or your SIEM of choice

IT teams can gain useful insights about potential security threats and events that your Google Workspace users encounter when browsing the web with Chrome, and they can take preventive measures against those threats through Security Reporting.

In the Google Workspace Admin console, organizations can enroll their Chrome browser and get detailed information about their browser deployment. IT teams can also set policies, manage extensions, and more. The Chrome management policies can be set to work alongside any end user-based policies that may be in place.

Once you’ve enabled Security events reporting (pictured above), you can then view reporting events within audit logs. Google Workspace Enterprise Plus or Education Plus users can use the Workspace Security Investigation Tool to identify, triage, and act on potential security threats.

As of today, Chrome can report on when users:

  • Navigate to a known malicious site.
  • Download or upload files containing known malware.
  • Reuse corporate passwords on non-approved sites.
  • Change corporate passwords after reusing them on non-approved sites.
  • Install extensions.

In addition to Google Workspace, you can also export these events into other Google Cloud products, such as Google Cloud Pub/Sub and Chronicle, or into leading third-party products such as Splunk, CrowdStrike, and Palo Alto Networks.
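
As a minimal sketch of what consuming these events downstream might look like, assuming you have already routed Chrome security reporting events to a Pub/Sub topic (the project and subscription names below are hypothetical):

# Minimal sketch: pull Chrome security reporting events from a Pub/Sub
# subscription. Project and subscription IDs are hypothetical, and the
# payload shape may vary by event type.
from concurrent import futures
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    "my-workspace-project", "chrome-security-events"
)

def handle_event(message: pubsub_v1.subscriber.message.Message) -> None:
    # Each message carries one reported event (e.g. malware transfer,
    # password reuse, unsafe site visit) as a JSON payload.
    print(message.data.decode("utf-8"))
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=handle_event)
try:
    # Block for a while; a production consumer would run this in a worker.
    streaming_pull.result(timeout=60)
except futures.TimeoutError:
    streaming_pull.cancel()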

5. Mitigate risk by keeping your browsers up to date with latest security updates

Modern web browsers, like any other software, can have "zero-day" vulnerabilities, which are undiscovered flaws in the software that can be exploited by attackers until they are identified and resolved. Fortunately, among all the browsers, Chrome is known to patch zero-day vulnerabilities quickly. However, to take advantage of this, IT teams have to ensure that all browsers in their environment are up to date. Our enterprise tools provide a smooth and seamless browser update process, enabling user productivity while maintaining optimal security. By leveraging these tools, businesses can ensure their users are safe and protected from potential security threats.

  • Version Report: Easily see all the versions of Chrome in your fleet across various operating systems in a daily report.
  • Force Auto Updates in Chrome: Trigger updates to newer versions of Chrome as soon as they’re available. Force users to relaunch Chrome to take updates more rapidly using enterprise policies. This keeps users on the latest version of Chrome, with the latest security fixes.
  • Controlling legacy browser usage: Some users continue to need access to old web applications that use plugins and ActiveX technology not supported by modern browsers. Legacy Browser Support functionality is integrated into Chrome, and reduces the time users spend with less secure browsers.

6. Ensure employees only use vetted extensions

Extensions pose a large security risk. Many extensions request powerful permissions that, if misused, could lead to security breaches or data loss. However, due to strong end user demand, it’s often not possible to fully block the installation of extensions.

  • Apps & Extensions usage report: Provides visibility into every Chrome extension that is installed across an enterprise’s fleet. Admins can force install or block any extension across any segment of their fleet.
  • Extensions workflow: Admins can decide under which circumstances an extension install needs to be reviewed by IT. A review workflow in the Google Admin console makes it easy for admins to review and approve install requests for specific users requesting an extension, or for their broader fleet.
  • Extensions details: Admins can see additional details about an extension’s permissions, and other relevant metadata. This info is surfaced in the Extensions list and Extensions workflow pages to make it easier for administrators to manage extensions.

7. Ensure your Google Workspace resources are only accessed from Managed Chrome Browsers with protections enabled

Context-Aware Access ensures only the right people, under the right conditions, access confidential information. Using Context-Aware Access, you can create granular access control policies for apps that access Workspace data based on attributes, such as user identity, location, device security status, and IP address.

To ensure that your Google Workspace resources are only accessed from managed Chrome browsers with protection enabled, you create custom access levels in Advanced mode, using Common Expressions Language (CEL). Learn more about managed queries in this help center article.

8. Enable BeyondCorp Enterprise Threat and Data Protections

Organizations that want to take an even more proactive approach to data security can deploy BeyondCorp Enterprise to protect their information and enable data loss prevention (including control over upload, download, print, save, copy, and paste), real-time phishing protection, malware deep scanning, and Zero Trust access to SaaS applications. Since BeyondCorp Enterprise is already built into Chrome, organizations can implement it frictionlessly without having to install additional agents.

Learn more about how Google supports today’s workforce with secure enterprise browsing here.



Connected TV (CTV) has not only transformed the entertainment world, it has also created a vibrant new platform for digital advertising. However, as with any innovative space, there are challenges that arise, including the emergence of bad actors aiming to siphon money away from advertisers and publishers through fraudulent or invalid ad traffic. Invalid traffic is an evolving challenge that has the potential to affect the integrity and health of digital advertising on CTV. However, there are steps the industry can take to combat invalid traffic and foster a clean, trustworthy, and sustainable ecosystem.



Information sharing and following best practices
Every player across the digital advertising ecosystem has the opportunity to help reduce the risk of CTV ad fraud. It starts by spreading awareness across the industry and building a commitment among partners to share best practices for defending against invalid traffic. Greater transparency and communication are crucial to creating lasting solutions.


One key best practice is contributing to and using relevant industry standards. We encourage CTV inventory providers to follow the CTV/OTT Device & App Identification Guidelines and IFA Guidelines. These guidelines, both of which were developed by the IAB Tech Lab, foster greater transparency, which in turn reduces the risk of invalid traffic on CTV. More information and details about using these resources can be found in the following guide: Protecting your ad-supported CTV experiences.



Collaborating on standards and solutions
No single company or industry group can solve this challenge on its own; we need to work collaboratively to solve the problem. Fortunately, we’re already seeing constructive efforts in this direction with industry-wide standards.


For example, the broad implementation of the IAB Tech Lab’s app-ads.txt and its web counterpart, ads.txt, have brought greater transparency to the digital advertising supply chain and have helped combat ad fraud by allowing advertisers to verify the sellers from whom they buy inventory. In 2021, the IAB Tech Lab extended the app-ads.txt standard to CTV in order to better protect and support CTV advertisers. This update is the first of several industry-wide steps that have been taken to further protect CTV advertising. In early 2022, the IAB Tech Lab released the ads.cert 2.0 “protocol suite,” along with a proposal to utilize this new standard to secure server-side connections (including for server-side ad insertion). Ads.cert 2.0 will also power future industry standards focused on securing the supply chain and preventing misrepresentation.


In addition to these efforts, the Media Rating Council (MRC) also engaged with stakeholders to develop its Server-Side Ad Insertion and OTT (Over-the-Top) Guidance, which provides a consistent set of guidelines specific to CTV for organizations that seek MRC accreditation for invalid traffic detection and filtration. We’re also seeing key partners tackle this challenge through informal working groups. For example, we collaborated with various CTV and security partners across our industry on a solution that allows companies to ensure video ad requests are coming from a valid Roku device.


But more work is needed. Players across the digital advertising ecosystem need to continue to build momentum through opportunities and initiatives that enable further collaboration on solutions.



Our ongoing investment in invalid traffic defenses
At Google, we’ve been defending our ad systems against invalid traffic for nearly two decades. By striking the right balance between automation and human expertise, we’ve developed a comprehensive set of measures to respond to threats like botnets, click farms, domain misrepresentation, and more. We’re now applying a similar approach to minimize the risk of CTV ad fraud, balancing innovation with tried-and-true technologies.


We’ve developed a machine learning platform built on TensorFlow, which has enabled us to expand the amount of inventory we can review and scale our defenses against invalid traffic to include additional surfaces, such as CTV. While machine learning has allowed us to better analyze ad traffic in new and diverse ways, we’ve also continued to leverage the work of research analysts and industry experts to ensure our automated enforcement systems are running effectively on CTV.


In addition to setting up new defenses for CTV, we’re also taking a more conservative approach with the CTV inventory we make available. This ensures that we aren’t exposing advertisers to unnecessary risk while CTV standards and best practices continue to evolve and mature, and while their adoption by the industry increases. 



Evolving and adapting
We know that bad actors continuously evolve and adapt their methods to evade detection and enforcement of our policies. The tactics behind invalid traffic and ad fraud will inevitably become more sophisticated with the growth of CTV. However, if the industry pulls together, we’ll be in a better position to not only address these new threats head on, but stay one step ahead of them while building a CTV advertising ecosystem that is safe and sustainable for everyone.

As Mobile World Congress approaches, we have the opportunity to have deep and meaningful conversations across the industry about the present and future of connected device security. Ahead of the event, we wanted to take a moment to recognize and share additional details on the notable progress being made to form harmonized connected device security standards and certification initiatives that provide users with better transparency about how their sensitive data is protected.

Supporting the GSMA Working Party for Mobile Device Security Transparency

We’re pleased to support and participate in the recently announced GSMA working party, which will develop a first-of-its-kind smartphone security certification program. The program will leverage the Consumer Mobile Device Protection Profile (CMD PP) specification released by ETSI, a European Standards Development Organization (SDO), and will provide a consistent way to evaluate smartphones for critical capabilities like encryption, security updates, biometrics, networking, trusted hardware, and more.

This initiative should help address a significant gap in the market for consumers and policy makers, who will greatly benefit from a new, central security resource. Most importantly, these certification programs will evaluate connected devices across industry-accepted criteria. Widely-used devices, including smartphones and tablets, which currently do not have a familiar security benchmark or system in place, will be listed with key information on device protection capabilities to bring more transparency to users.

We hope this industry-run certification program can also benefit users and support policy makers in their work as they address baseline requirements and harmonization of standards. As policy makers consider changes through regulation and legislation, such as the UK’s Product Security and Telecommunications Infrastructure Act (PSTI), and emerging regulation like the EU Cyber Security and Cyber Resilience Acts, we share the concern that today we are not equipped with globally recognized standards that are critical to increased security across the ecosystem. We join governments in the call to come together to ensure that we can build workable, harmonized standards to protect the security of users and mobile infrastructure today and build the resilience needed to protect our future.

The Importance of Harmonized Standards for Connected Devices

Connected devices, not just smartphones, are increasingly becoming the primary touchpoint for the most important aspects of our personal lives. From controlling the temperature of your home, to tracking your latest workout – connected devices have become embedded in our day-to-day tasks and activities. As consumers increasingly entrust more of their lives to their connected devices, they’re right to question the security protections provided and demand more transparency from manufacturers.

After we participated in a recent White House Workshop on IoT security labeling, we shared more about our commitment to security and transparency by announcing the extension of device security assessments, which started with Pixel 3 and now include Nest and Fitbit hardware. We have always strived, and always will strive, to ensure our newly released products comply with the most prevalent security baselines defined by industry-recognized standards organizations. We will also remain transparent about critical security features, like how long our devices will receive security updates, and about our collaboration with security researchers who help us identify and fix security issues to help keep users safe.

By participating in international standards and certification programs such as our work as a member of the Connectivity Standards Alliance (Alliance), we’re working to raise the bar for the industry and develop a consistent set of security requirements that users can rely on.


New Research Continues to Help Inform Our Efforts to Establish Strong Security Standards and Labeling Practices

Last year, the Alliance formed the Product Security Working Group (PSWG). Over the past nine months, the working group has been making terrific progress on its mission to build an industry-run certification program for IoT devices that aligns with existing and future regulatory requirements to reduce fragmentation and promote harmonization.

Today, the Alliance in partnership with independent research firm Omdia, published a comprehensive research report that outlines all of the currently published and emerging global IoT security regulations and the standards baselines they map to. This critical research enables PSWG to hone its focus and efforts on harmonizing between ETSI EN 303 645 and NIST IR 8425, as these two baseline security standards were found to underpin the vast majority of the regulations outlined in the research report.

The other notable area of the report highlighted the need for transparent security labeling for connected devices, which has also become a very important industry initiative. A large majority (77%) of consumers surveyed indicated a device label that explains the privacy and security practices of the manufacturer would be important or very important to their purchasing decision. Transparent security labeling is critical in helping consumers understand which devices meet specific security standards and requirements during evaluation. We recently provided our principles for IoT security labeling and will continue to be a key contributor to efforts around providing users with transparent device security labels.

Creating Strong Connected Device Security Standards Together

It’s been inspiring to see all of the progress that the Connectivity Standards Alliance, GSMA and the industry at large has made on security standards and labeling initiatives in such a short time. It’s even more exciting to see how much collaboration there has been between both industry and the public sector on these efforts. We look forward to continuing the conversation and coordinating on these important security initiatives with policymakers, industry partners, developers and public interest advocates to bring more security and transparency to connected device users.

It has been another incredible year for the Vulnerability Reward Programs (VRPs) at Google! Working with security researchers throughout 2022, we have been able to identify and fix over 2,900 security issues and continue to make our products more secure for our users around the world.

We are thrilled to see significant year-over-year growth for our VRPs, and have had yet another record-breaking year for our programs! In 2022 we awarded over $12 million in bounty rewards – with researchers donating over $230,000 to a charity of their choice.

As in past years, we are sharing our 2022 Year in Review statistics across all of our programs. We would like to give a special thank you to all of our dedicated researchers for their continued work with our programs - we look forward to more collaboration in the future!

Android and Devices

The Android VRP had an incredible record breaking year in 2022 with $4.8 million in rewards and the highest paid report in Google VRP history of $605,000!

In our continued effort to ensure the security of Google device users, we have expanded the scope of Android and Google Devices in our program and are now incentivizing vulnerability research in the latest versions of Google Nest and Fitbit! For more information on the latest program version and qualifying vulnerability reports, please visit our public rules page.

We are also excited to share that the invite-only Android Chipset Security Reward Program (ACSRP) - a private vulnerability reward program offered by Google in collaboration with manufacturers of Android chipsets - rewarded $486,000 in 2022 and received over 700 valid security reports.

We would like to give a special shoutout to some of our top researchers, whose continued hard work helps to keep Android safe and secure:

  • Submitting an impressive 200+ vulnerabilities to the Android VRP this year, Aman Pandey of Bugsmirror remains one of our program’s top researchers. Since submitting their first report in 2019, Aman has reported more than 500 vulnerabilities to the program. A huge thank you for all of the work that helps ensure the safety of our users!
  • Zinuo Han of OPPO Amber Security Lab quickly rose through our program’s ranks, becoming one of our top researchers. In the last year they have identified 150 valid vulnerabilities in Android.
  • Finding yet another critical exploit chain, gzobqq submitted our highest valued exploit to date.
  • Yu-Cheng Lin (林禹成) (@AndroBugs) remains one of our top researchers submitting just under 100 reports this year.

Chrome

Chrome VRP had another unparalleled year, receiving 470 valid and unique security bug reports, resulting in a total of $4 million of VRP rewards. Of the $4M, $3.5 million was rewarded to researchers for 363 reports of security bugs in Chrome Browser and nearly $500,000 was rewarded for 110 reports of security bugs in ChromeOS.

This year, Chrome VRP re-evaluated and refactored its reward amounts, increasing rewards for the most exploitable and harmful classes and types of security bugs, and adding a new category for memory corruption bugs in highly privileged processes, such as the GPU and network processes, to incentivize research in these critical areas. The Chrome VRP also increased the fuzzer bonuses for reports from VRP-submitted fuzzers running on the Google ClusterFuzz infrastructure as part of the Chrome Fuzzing program. A new bisect bonus was introduced for bisections performed as part of bug report submission, which help the security team with triage and bug reproduction.

2023 will be the year of experimentation in the Chrome VRP! Please keep a lookout for announcements of experiments and potential bonus opportunities for Chrome Browser and ChromeOS security bugs.

The entire Chrome team sincerely appreciates the contributions of all our researchers in 2022 who helped keep Chrome Browser, ChromeOS, and all the browsers and software based on Chromium secure for billions of users across the globe.

In addition to posting about our Top 0-22 Researchers in 2022, Chrome VRP would like to specifically acknowledge some specific researcher achievements made in 2022:

  • Rory McNamara, a six-year participant in Chrome VRP as a ChromeOS researcher, became the highest rewarded researcher of all time in the Chrome VRP. Most impressive is that Rory has achieved this in a total of only 40 security bug submissions, demonstrating just how impactful his findings have been - from ChromeOS persistent root command execution, resulting in a $75,000 reward back in 2018, to his many reports of root privilege escalation both with and without persistence. Rory was also kind enough to speak at the Chrome Security Summit in 2022 to share his experiences participating in the Chrome VRP over the years. Thank you, Rory!
  • SeongHwan Park (SeHwa), a participant in the Chrome VRP since mid-2021, has been an amazing contributor of ANGLE / GPU security bug reports in 2022 with 11 solid quality reports of GPU bugs earning them a spot on Chrome VRP 2022 top researchers list. Thank you, SeHwa!

Securing Open Source

Recognizing that Google is one of the largest contributors to and users of open source in the world, in August 2022 we launched the OSS VRP to reward vulnerabilities in Google's open source projects, covering supply chain issues of our packages as well as vulnerabilities that may occur in end products using our OSS. Since then, over 100 bug hunters have participated in the program and have been rewarded over $110,000.

Sharing Knowledge

We’re pleased to announce that in 2022, we’ve made the learning opportunities for bug hunters available at our Bug Hunter University (BHU) more diverse and accessible. In addition to our existing collections of articles, which support improving your reports and avoiding invalid reports, we’ve made more than 20 instructional videos available. Clocking in at around 10 minutes each, these videos cover the most relevant learning topics and trends we’ve observed over the past years.

To make this happen, we teamed up with some of your favorite and best-known security researchers from around the globe, including LiveOverflow, PwnFunction, stacksmashing, InsiderPhD, PinkDraconian, and many more!

If you’re tired of reading our articles, or simply curious and looking for an alternative way to expand your bug hunting skills, these videos are for you. Check out our overview, or hop right in to the BHU YouTube playlist. Happy watching & learning!


Google Play

2022 was a year of change for the Google Play Security Reward Program. In May we onboarded both new teammates and some old friends to triage and lead GPSRP. We also sponsored NahamCon ‘22, BountyCon in Singapore, and NahamCon Europe’s online event. In 2023 we hope to continue to grow the program with new bug hunters and partner on more events focused on Android & Google Play apps.

Research Grants

In 2022 we continued our Vulnerability Research Grant program with success. We’ve awarded more than $250,000 in grants to over 170 security researchers. Last year we also piloted collaboration double VRP rewards for selected grants and are looking forward to expanding it even more in 2023.

If you are a Google VRP researcher and want to be considered for a Vulnerability Research Grant, make sure you opted in on your bughunters profile.

Looking Forward

Without our incredible security researchers we wouldn’t be here sharing this amazing news today. Thank you again for your continued hard work!

Also, in case you haven’t seen Hacking Google yet, make sure to check out the “Bug Hunters” episode, featuring some of our very own super talented bug hunters.

Thank you again for helping to make Google, the Internet, and our users more safe and secure! Follow us on @GoogleVRP for other news and updates.

Thank you to Adam Bacchus, Dirk Göhmann, Eduardo Vela, Sarah Jacobus, Amy Ressler, Martin Straka, Jan Keller, Tony Mendez, Rishika Hooda, Medha Jain

A modern Android powered smartphone is a complex hardware device: Android OS runs on a multi-core CPU, also called the Application Processor (AP), which is just one of many processors on a System on Chip (SoC). Other processors on the SoC perform various specialized tasks, such as security functions, image and video processing, and, most importantly, cellular communications. The processor that performs cellular communications is often referred to as the baseband. For the purposes of this blog, we refer to the software that runs on all these other processors as “Firmware”.

Securing the Android Platform requires going beyond the confines of the Application Processor (AP). Android’s defense-in-depth strategy also applies to the firmware running on bare-metal environments in these microcontrollers, as they are a critical part of the attack surface of a device.

A popular attack vector within the security research community

As the security of the Android Platform has steadily improved, some security researchers have shifted their focus towards other parts of the software stack, including firmware. Over the last decade there have been numerous publications, talks, Pwn2Own contest winners, and CVEs targeting exploitation of vulnerabilities in firmware running on these secondary processors. Bugs that are remotely exploitable over the air (e.g., WiFi and cellular baseband bugs) are of particular concern and, therefore, are popular within the security research community. These types of bugs even have their own categorization in well-known third-party exploit marketplaces.

Regardless of whether it is remote code execution within the WiFi SoC or within the cellular baseband, a common and resonating theme has been the consistent lack of exploit mitigations in firmware. Conveniently, Android has significant experience in enabling exploit mitigations across critical attack surfaces.

Applying years worth of lessons learned in systems hardening

Over the last few years, we have successfully enabled compiler-based mitigations in Android — on the AP — which add additional layers of defense across the platform, making it harder to build reproducible exploits and to prevent certain types of bugs from becoming vulnerabilities. Building on top of these successes and lessons learned, we’re applying the same principles to hardening the security of firmware that runs outside of Android per se, directly on the bare-metal hardware.

In particular, we are working with our ecosystem partners in several areas aimed at hardening the security of firmware that interacts with Android:

Bare-metal support

Compiler-based sanitizers have no runtime requirements in trapping mode, which provides a meaningful layer of protection we want: it causes the program to abort execution when detecting undefined behavior. As a result, memory corruption vulnerabilities that would otherwise be exploitable are now stopped entirely. To aid developers in testing, troubleshooting, and generating bug reports on debug builds, both minimal and full diagnostics modes can be enabled, which require defining and linking the requisite runtime handlers.

Most Control Flow Integrity (CFI) schemes also work for bare-metal targets in trapping mode. LLVM's[1] cross-DSO scheme (CFI across shared libraries) is the exception, as it requires a runtime to be defined for the target. Shadow Call Stack, an AArch64-only feature, has a runtime component that initializes the shadow stack. LLVM does not provide this runtime for any target, so bare-metal users would need to define that runtime to use it.

The challenge

Enabling exploit mitigations in firmware running on bare metal targets is no easy feat. While the AP (Application Processor) hosts a powerful operating system (Linux) with comparatively abundant CPU and memory resources, bare metal targets are often severely resource-constrained, and are tuned to run a very specific set of functions. Any perturbation in compute and/or memory consumption introduced by enabling, for example, compiler-based sanitizers, could have a significant impact in functionality, performance, and stability.

Therefore, it is critical to optimize how and where exploit mitigations are turned on. The goal is to maximize impact — harden the most exposed attack surface — while minimizing any performance/stability impact. For example, in the case of the cellular baseband, we recommend focusing on code and libraries responsible for parsing messages delivered over the air (particularly for pre-authentication protocols such as RRC and NAS, which are the most exposed attack surface), libraries encoding/decoding complex formats (for example ASN.1), and libraries implementing IMS (IP Multimedia Subsystem) functionality or parsing SMS and/or MMS.

Fuzzing and Vulnerability Rewards Program

Exploit mitigations and compiler-based sanitizers are excellent techniques for minimizing the chances of unknown bugs becoming exploitable. However, it is also important to continuously look for, find, and patch bugs.

Fuzzing continues to be a highly efficient method to find impactful bugs. It’s also been proven to be effective for signaling larger design issues in code. Our team partners closely with Android teams working on fuzzing and security assessments to leverage their expertise and tools with bare metal targets.

This collaboration also allowed us to scale fuzzing activities across Google by deploying central infrastructure that allows fuzzers to run in perpetuity. This is a high-value approach known as continuous fuzzing.

In parallel, we also accept and reward external contributions via our Vulnerability Rewards Program. Along with the launch of Android 13, we updated the severity guidelines to further highlight remotely exploitable bugs in connectivity firmware. We look forward to the contributions from the security research community to help us find and patch bugs in bare metal targets.

On the horizon

In Android 12 we announced support for Rust in the Android platform, and Android 13 is the first release with a majority of new code written in a memory safe language. We see a lot of potential in also leveraging memory-safe languages for bare metal targets, particularly for high risk and exposed attack surface.

Hardening firmware running on bare metal to materially increase the level of protection - across more surfaces in Android - is one of the priorities of Android Security. Moving forward, our goal is to expand the use of these mitigation technologies for more bare metal targets, and we strongly encourage our partners to do the same. We stand ready to assist our ecosystem partners to harden bare metal firmware.

Special thanks to our colleagues who contributed to this blog post and our firmware security hardening efforts: Diana Baker, Farzan Karimi, Jeffrey Vander Stoep, Kevin Deus, Eugene Rodionov, Pirama Arumuga Nainar, Sami Tolvanen, Stephen Hines, Xuan Xing, Yomna Nasser.

Notes


  1. LLVM is a compiler framework used by multiple programming languages.

Should companies be responsible for cyberattacks? The U.S. government thinks so – and frankly, we agree.

Jen Easterly and Eric Goldstein of the Cybersecurity and Infrastructure Security Agency at the Department of Homeland Security planted a flag in the sand:

“The incentives for developing and selling technology have eclipsed customer safety in importance. […] Americans…have unwittingly come to accept that it is normal for new software and devices to be indefensible by design. They accept products that are released to market with dozens, hundreds, or even thousands of defects. They accept that the cybersecurity burden falls disproportionately on consumers and small organizations, which are often least aware of the threat and least capable of protecting themselves.”

We think they’re right. It’s time for companies to step up on their own and work with governments to help fix a flawed ecosystem. Just look at the growing threat of ransomware, where bad actors lock up organizations’ systems and demand payment or ransom to restore access. Ransomware affects every industry, in every corner of the globe – and it thrives on pre-existing vulnerabilities: insecure software, indefensible architectures, and inadequate security investment.

Remember that sophisticated ransomware operators have bosses and budgets too. They increase their return on investment by exploiting outdated and insecure technology systems that are too hard to defend. Alarmingly, the most significant source of compromise is through exploitation of known vulnerabilities, holes sometimes left unpatched for years. While law enforcement works to bring ransomware operators to justice, this merely treats the symptoms of the problem.


Treating the root causes will require addressing the underlying sources of digital vulnerabilities. As Easterly and Goldstein rightly point out, “secure by default” and “secure by design” should be table stakes.

The bottom line: People deserve products that are secure by default and systems that are built to withstand the growing onslaught from attackers. Safety should be fundamental: built-in, enabled out of the box, and not added on as an afterthought. In other words, we need secure products, not security products. That’s why Google has worked to build security in for our users, often making it invisible. Many of our most significant security features, including innovations like Safe Browsing, do their best work behind the scenes for our core consumer products.

There’s come to be an unfortunate belief that security features are cumbersome and hurt user experience. That can be true – but it doesn’t need to be. We can make the safe path the easiest, most helpful path for people using our products. Our approach to multi-factor authentication – one of the most important controls to defend against phishing attacks – provides a great example. Since 2021, we’ve turned on 2-Step Verification (2SV) by default for hundreds of millions of people to add an additional layer of security across their online accounts. If we had simply announced 2SV as an available option for people to enroll in, it would have failed like so many other security add-ons. Instead, we pioneered an approach using in-app notifications that was so seamless and integrated, many of the millions of people we auto-enrolled never noticed they adopted 2SV. We’ve taken this approach even further by building the “second factor” right into phones – giving people the strongest form of account security as soon as they have their device.

As for secure by design: We all have to shift our focus from reactive incident response to upstream software development. That will demand a completely new approach to how companies build products and services. We’ve learned a lot in the past decade about reengineering security architectures, and actively apply those learnings to keep people safe online every day. Ensuring technology is secure by design should be like balancing budgets — a part of business as usual. However, it isn’t easy to cut-and-paste solutions here: developers need to think deeply about the threats their products will face, and design them from the ground up to withstand those attacks. And the same principles are true for securing the development process as they are for users: the secure engineering choice must also be the easiest and most helpful one.

Building security into every stage of the software development process takes work, but recent innovations, like our SLSA framework for secure software supply chains, and new general purpose memory-safe languages, are making it easier. Perhaps most significantly, adopting modern cloud architectures makes it easier to define and enforce secure software development policies.

Persistent collaboration between private and public sector partners is essential. No company can solve the cybersecurity challenge on its own. It’s a collective action problem that demands a collective solution, including international coordination and collaboration. Many public and private initiatives — threat sharing, incident response, law enforcement cooperation — are valuable, but address only symptoms, not root causes. We can do better than just holding attackers to account after the fact.

As Easterly and Goldstein write, “Americans need a new model, one they can trust to ensure the safety and integrity of the technology that they use every hour of every day.” Again, we agree, but in this case we’d take it a step further. Building this model and ensuring it can scale calls for close cooperation between tech companies, standards bodies, and government agencies. But since technologies and companies cross borders, we also need to take a global view: Cybersecurity is a team sport, and international coordination is essential to avoid conflicting requirements that unintentionally make it harder to secure software. Broad regulatory cooperation on cybersecurity will promote secure-by-default principles for everyone. This approach holds enormous promise, and not just for technologically advanced nations. Raising the security benchmark for basic consumer and enterprise technologies that all nations rely on offers far more bang for the buck. A far wider range of countries and companies can take these simple steps than can employ advanced cyber initiatives like detailed threat sharing and close operational collaboration. Given the interdependent nature of the ecosystem, we are only as strong as our weakest link. That means raising cyber standards globally will improve American resilience as well.

Of course, raising the security baseline won’t stop all bad actors, and software will likely always have flaws – but we can start by covering the basics, fixing the most egregious security risks, and coming up with new approaches that eliminate entire classes of threats. Google has made investments in the past two decades, but contributing resources is just a piece of the puzzle. It's work for all of us, but it's the responsible thing to do: The safety and security of our increasingly digitized world depends on it.

Since launching in 2016, Google's free OSS-Fuzz code testing service has helped get over 8,800 vulnerabilities and 28,000 bugs fixed across 850 projects. Today, we’re happy to announce an expansion of our OSS-Fuzz Rewards Program, plus new features in OSS-Fuzz and our involvement in supporting academic fuzzing research.

Refreshed OSS-Fuzz rewards

The OSS-Fuzz project's purpose is to support the open source community in adopting fuzz testing, or fuzzing — an automated code testing technique for uncovering bugs in software. In addition to the OSS-Fuzz service, which provides a free platform for continuous fuzzing to critical open source projects, we established an OSS-Fuzz Reward Program in 2017 as part of our wider Patch Rewards Program.

We’ve operated this successfully for the past 5 years, and to date, the OSS-Fuzz Reward Program has awarded over $600,000 to over 65 different contributors for their help integrating new projects into OSS-Fuzz.

Today, we’re excited to announce that we’ve expanded the scope of the OSS-Fuzz Reward Program considerably, introducing many new types of rewards!

These new reward types cover contributions such as:

  • Project fuzzing coverage increases
  • Notable FuzzBench fuzzer integrations
  • Integrating a new sanitizer (example) that finds two new vulnerabilities

These changes boost the total rewards possible per project integration from a maximum of $20,000 to $30,000 (depending on the criticality of the project). In addition, we’ve also established two new reward categories that reward wider improvements across all OSS-Fuzz projects, with up to $11,337 available per category.

For more details, see the fully updated rules for our dedicated OSS-Fuzz Reward Program.

OSS-Fuzz improvements

We’ve continuously made improvements to OSS-Fuzz’s infrastructure over the years and expanded our language offerings to cover C/C++, Go, Rust, Java, Python, and Swift, and have introduced support for new frameworks such as FuzzTest. Additionally, as part of an ongoing collaboration with Code Intelligence, we’ll soon have support for JavaScript fuzzing through Jazzer.js.
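
For readers who have not written a fuzz target before, here is a minimal sketch of what one looks like in Python using Atheris, the engine OSS-Fuzz uses for Python projects. The parse_config function is a hypothetical stand-in for whatever library code you want to test.

# Minimal sketch of a Python fuzz target using Atheris (pip install atheris).
# parse_config is a hypothetical stand-in for the code under test.
import sys
import atheris

def parse_config(text: str) -> dict:
    # Hypothetical parser; a real target would exercise your library instead.
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    try:
        parse_config(fdp.ConsumeUnicodeNoSurrogates(1024))
    except ValueError:
        pass  # Expected errors are fine; crashes and hangs are the bugs we want.

atheris.instrument_all()  # enable coverage-guided instrumentation
atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()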

FuzzIntrospector support

Last year, we launched the OpenSSF FuzzIntrospector tool and integrated it into OSS-Fuzz.

We’ve continued to build on this by adding new language support and better analysis, and now C/C++, Python, and Java projects integrated into OSS-Fuzz have detailed insights on how the coverage and fuzzing effectiveness for a project can be improved.

The FuzzIntrospector tool provides these insights by identifying complex code blocks that are blocked during fuzzing at runtime, as well as suggesting new fuzz targets that can be added. We’ve seen users successfully use this tool to improve the coverage of jsonnet, file, xpdf and bzip2, among others.

Anyone can use this tool to increase the coverage of a project and in turn be rewarded as part of the refreshed OSS-Fuzz rewards. See the full list of all OSS-Fuzz FuzzIntrospector reports to get started.

Fuzzing research and competition

The OSS-Fuzz team maintains FuzzBench, a service that enables security researchers in academia to test fuzzing improvements against real-world open source projects. Approaching its third anniversary in serving free benchmarking, FuzzBench is cited by over 100 papers and has been used as a platform for academic fuzzing workshops such as NDSS’22.

This year, FuzzBench has been invited to participate in the SBFT'23 workshop in ICSE, a premier research conference in the field, which for the first time is hosting a fuzzing competition. During this competition, the FuzzBench platform will be used to evaluate state-of-the-art fuzzers submitted by researchers from around the globe on both code coverage and bug-finding metrics.

Get involved!

We believe these initiatives will help scale security testing efforts across the broader open source ecosystem. We hope to accelerate the integration of critical open source projects into OSS-Fuzz by providing stronger incentives to security researchers and open source maintainers. Combined with our involvement in fuzzing research, these efforts are making OSS-Fuzz an even more powerful tool, enabling users to find more bugs, and, more critically, find them before the bad guys do!