Last week at SAS Innovate in Las Vegas, SAS, a leader in data and AI, introduced new products and services designed to strengthen AI governance and trust. These initiatives include model cards and AI Governance Advisory services aimed at helping organizations manage AI risks effectively and pursue their AI objectives with confidence. SAS also introduced a Trustworthy AI Life Cycle Workflow, aligned with the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
Reggie Townsend, Vice President of SAS Data Ethics Practice, emphasized the company’s commitment to providing tools and guidance for responsible and ethical AI integration. He highlighted the goal of maximizing profitability while minimizing unintended harm through AI.
Model cards are a forthcoming feature in SAS® Viya® and are akin to ‘nutrition labels’ for AI models. They automatically generate detailed information about AI models, helping stakeholders understand and manage them more effectively. These cards are beneficial for everyone involved in the AI lifecycle, from developers to executives, facilitating compliance with global AI regulations. The cards detail crucial aspects such as model accuracy, fairness, and drift, alongside governance and usage information.
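To make the "nutrition label" idea concrete, the sketch below shows how the kind of metadata a model card captures might be represented as a simple data structure. The field names and values here are illustrative assumptions only, not SAS's actual model card schema or any SAS Viya API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only -- these fields are assumptions, not SAS Viya's model card format.
@dataclass
class ModelCard:
    model_name: str
    version: str
    owner: str                     # governance: who is accountable for the model
    intended_use: str              # usage information for non-technical stakeholders
    accuracy: float                # e.g., holdout accuracy at last validation
    fairness: Dict[str, float]     # e.g., disparity metrics per protected attribute
    drift: Dict[str, float]        # e.g., population stability index per input variable
    limitations: List[str] = field(default_factory=list)

card = ModelCard(
    model_name="credit_risk_scorer",
    version="1.2.0",
    owner="Risk Analytics Team",
    intended_use="Rank-order consumer credit applications; not for pricing decisions.",
    accuracy=0.87,
    fairness={"age_group_disparity": 0.03},
    drift={"income_psi": 0.11},
    limitations=["Trained on 2021-2023 applications only"],
)
print(card.model_name, card.drift)
```

Keeping these fields in one machine-readable record is what lets a card be generated automatically and read by both developers and executives.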
Eric Gao, Research Director at IDC, praised SAS for its practical approach to AI deployment, noting the importance of model cards in promoting transparency and monitoring AI projects.
The new AI Governance Advisory service from SAS will offer tailored guidance to help organizations navigate AI governance. This service, which has been pilot-tested with customers like PZU Insurance of Poland, focuses on identifying and mitigating potential risks associated with AI applications.
Steven Tiell, an experienced professional in ethical AI, has joined SAS as the Global Head of AI Governance. His background includes leading roles at Accenture and DataStax, where he focused on data ethics and responsible innovation.
Additionally, SAS's Trustworthy AI Life Cycle Workflow provides a structured approach to implementing NIST's AI management guidelines. It is designed to help verify that models are fair and do not harm specific groups, incorporating steps such as human-in-the-loop tasks to maintain accuracy over time. The workflow is available via SAS Model Manager Resources on GitHub and will soon be offered through the NIST AI Resource Center.
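As a rough illustration of how a human-in-the-loop step can gate a deployed model when drift is detected, the snippet below sketches such a check. The threshold, metric names, and functions are hypothetical assumptions for illustration, not taken from SAS's Trustworthy AI Life Cycle Workflow.

```python
# Hypothetical sketch of a human-in-the-loop drift gate; the threshold and metric
# names are illustrative assumptions, not part of SAS's workflow.
DRIFT_THRESHOLD = 0.2  # e.g., a population stability index above which review is required

def requires_human_review(drift_metrics: dict) -> bool:
    """Flag the model for manual review if any monitored input has drifted too far."""
    return any(value > DRIFT_THRESHOLD for value in drift_metrics.values())

def next_action(drift_metrics: dict) -> str:
    if requires_human_review(drift_metrics):
        # A reviewer decides whether to retrain, adjust, or retire the model.
        return "route to human reviewer"
    return "keep model in production"

print(next_action({"income_psi": 0.11, "age_psi": 0.27}))  # -> route to human reviewer
```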