Credo AI has developed the world's largest and most comprehensive AI Risk and Controls Library. The library is designed to anticipate and mitigate adverse incidents, streamlining the governance and safe deployment of AI systems.
"New standards such as ISO 42001 and NIST's new Generative AI risk profile demand precise and individualized governance of AI systems. As a leading AI governance platform, Credo AI is dedicated to ensuring our platform helps organizations adopt the latest Generative AI capabilities swiftly and with the utmost confidence and compliance. We are committed to continuously updating our controls library with the most relevant, risk-mitigating measures, enabling enterprises to embrace AI safely and confidently," said Navrina Singh, CEO and Founder of Credo AI.
Credo AI's expanded AI Risk and Controls Library lets users pinpoint the risks associated with specific AI tools or applications and access the controls needed to mitigate them. Combined with existing features that streamline Governance, Risk, and Compliance (GRC) for AI, this enhancement accelerates the AI governance process, enabling companies to become trustworthy, AI-powered leaders in their industries.
GenAI-specific risk scenarios and controls
In April of this year, NIST introduced a new standard for GenAI governance: the draft AI RMF Generative AI Profile. Developed over the past year with contributions from more than 2,500 members of NIST's generative AI public working group, including Credo AI, the profile extends NIST's AI Risk Management Framework. It helps organizations identify the unique risks posed by generative AI and proposes effective risk management strategies.
Building on these insights, Credo AI has augmented its platform with over 400 new GenAI-specific controls, expanding the Credo AI Risk and Controls Library to nearly 700 AI Risk Scenarios and corresponding controls.
As AI applications proliferate, use-case-specific governance becomes crucial: low-risk AI applications can proceed quickly, while high-risk deployments must be carefully managed to prevent potential harms. With a pre-built library of AI risks and controls, development teams can concentrate on creating innovative products while automating the essential tasks of AI governance, risk management, and compliance.
Recently, Credo AI unveiled a suite of new features to strengthen its Governance, Risk, and Compliance (GRC) capabilities following the final European Parliament plenary vote on the EU AI Act.