Security & Identity

Coalfire evaluates Google Cloud AI: ‘Mature,’ ready for governance, compliance

May 31, 2024
Jeanette Manfra

Senior Director, Global Risk & Compliance, Google Cloud

Nick Godfrey

Senior Director, Office of the CISO, Google Cloud

At Google Cloud, we've long demonstrated our commitment to responsible AI development and transparency. This work supports safer and more accountable products, helps us earn and keep our customers’ trust, and fosters a culture of responsible innovation. We understand that AI comes with complexities and risks, so to prepare for the future landscape of AI compliance, we proactively benchmark ourselves against emerging AI governance frameworks.

To put our commitments into practice, we invited Coalfire, a respected leader in cybersecurity, to examine our current processes and measure our alignment and maturity against the objectives defined in the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) and the International Organization for Standardization (ISO) ISO/IEC 42001 standard. Coalfire’s assessment provided valuable insights, allowing us to enhance our security posture as we continuously work to uphold the highest standards of data protection and privacy.

We believe that an independent and external perspective offers critical objectivity, and we are proud to be among the first organizations to perform a third-party AI readiness assessment.

At the heart of our approach is the relentless evaluation of the control systems we've designed to uphold the safe and secure design, development, and use of AI systems. We harness frameworks that help mitigate risks specific to AI systems, such as our Secure AI Framework (SAIF), and continue to invest in a comprehensive risk management framework and governance structure.

NIST AI RMF and ISO/IEC 42001

To ensure we meet requirements in emerging standards, we’ve built a cross-functional team that continuously collaborates with standards development organizations, such as NIST and ISO. In 2023, both NIST and ISO published guides for organizations looking to bolster their AI governance programs, including security, privacy, and risk management. The NIST AI RMF and ISO/IEC 42001 standard offer essential guidance for building systemic processes to manage potential AI risks and build trustworthy AI systems.

Google Cloud’s comprehensive strategy for AI governance and risk management combines traditional and innovative methods to address the unique challenges posed by AI. Here are three key best practices for establishing an effective AI governance framework and managing AI risks:

1. Define clear AI principles

Establish guiding AI principles that articulate the foundational requirements, priorities, and expectations for the organization’s approach to developing AI. These principles should explicitly outline which use cases are out of scope, and they offer a clear framework for consistently evaluating decisions and risks.

For example, our AI principles help us set the tone for managing AI risks by focusing on evaluating potential harms, avoiding creating or reinforcing bias, and providing guidance on how to securely develop and deploy AI systems. They help us develop technology responsibly, and drive accountability and transparency.

“When conducting the NIST AI RMF and ISO/IEC 42001 assessment, it became immediately apparent that Google has been working on responsible AI use and development for a long time. While gen AI has really hit the headlines in the last 18 months, Google’s AI principles date back more than a decade and provide a mature foundation for AI development,” said Ian Walters, AI risk assessor, Coalfire.

2. Use existing foundations

We found that adapting current risk management processes to the needs of AI systems is more effective than standing up entirely new ones. By leveraging strong existing foundations, organizations can evaluate and address AI-related risks in line with their risk tolerance and within the broader context of existing threats. This leads to a more holistic risk management strategy and more consistent governance practices.

Integrating AI risks into current risk management practices increases visibility across the organization and is essential for comprehensively managing the specific risks associated with AI systems. For example, a strong security framework helps organizations evaluate the relevance of traditional controls and determine how they may need to be adapted or expanded to cover AI systems.

3. Adapt to evolving landscape

Given the dynamic nature and complexity of AI technology and the evolving regulatory landscape, AI risk management practices must continuously evolve as well. Critical to this is building a multidisciplinary team that understands AI system development, implementation, monitoring, and validation. Additionally, risk assessments for AI systems need to be conducted in the context of evolving product use cases, data sensitivity, deployment scenarios, and operational support.

The approach for identifying, assessing, and mitigating risks across the full AI lifecycle must be prioritized and consistently maintained. New challenges and potential threats will emerge, making adaptability and vigilance top priorities to help ensure the safety and security of AI systems. 

Next steps

As AI frameworks and regulations continue to emerge and develop, we are working with governments, industry leaders, customers, and partners to ensure our AI systems meet the highest standards. 

Our commitment to frameworks such as NIST AI RMF and ISO/IEC 42001, and our engagement with independent assessors such as Coalfire, help us enhance our AI governance practices. In the process, we are committed to sharing our learnings, strategies, and guidance so we can collectively build and deliver responsible, secure, compliant, and trustworthy AI systems.
