The analyst's AI governance framework, shown below, helps senior executives identify all potential risks posed by AI systems and provides a checklist of questions to ask tech vendors during the procurement process to verify that vendors implement AI responsibly.
Key Highlights
- Artificial intelligence (AI) technologies can improve productivity and profits for most businesses but also expose companies to risk. Potential issues range from copyright infringement to data privacy breaches and the risk of actual physical harm. Increased use of AI will also reinforce and exacerbate many of society’s biggest challenges, including bias, discrimination, misinformation, and other online harms.
- Companies that fail to adopt the highest standards of AI governance face substantial reputational and financial risk. For instance, in 2024, Google had to temporarily block its new AI image generation model after it inaccurately portrayed German Second World War soldiers as people of color. In 2023, iTutor Group paid $365,000 to settle a lawsuit after its AI-powered recruiting software automatically rejected applicants based on age.
- There are currently no global regulatory standards for AI, so it can be difficult for CEOs to know what constitutes best practice governance for AI systems. Instead, governments have largely left companies to voluntarily embed responsible AI values and practices into their AI strategies. Responsible AI is an approach to developing AI and managing AI-related risks from an ethical and legal perspective. Companies that invest in responsible AI early will have an advantage over their competitors: they can show they are good corporate citizens while actively preparing for upcoming regulations.
Scope
- There is broad agreement among ethicists and tech advocates that responsible AI requires alignment with a set of internationally recognized principles. However, it is not always clear how to operationalize these principles, interpret them, or handle situations when conflicts arise between them.
- The analyst’s AI governance framework is based on five AI principles: transparency, accountability, safety, reliability, and social impact.
Reasons to Buy
- The journey towards responsible AI is complex and fraught with uncertainty. Risk can originate from different sources and multiply as AI systems are implemented. Our AI governance framework helps senior executives identify all potential AI risks within five broad classifications: transparency, accountability, safety, reliability, and social impact.
- If you are a senior executive at a company deploying AI systems designed by a third-party tech vendor, the onus is on you to ensure that your business uses AI responsibly. To help you, our AI governance framework provides a checklist of questions you should ask tech vendors to ensure that the AI systems they implement on your behalf follow a responsible AI approach.
Table of Contents
- Executive Summary
- The analyst's AI Governance Framework
- Breaking Down Our AI Governance Framework
- Timeline
- Checklist of AI Risks
- Checklist of Vendor Questions
- Glossary
- Further Reading
- Thematic Research Methodology