By Samer Bohsali, Head of Social & Public Sector across the Middle East, and Julie Coffman, Global Chief Diversity Officer at Bain & Company.
Generative AI has rapidly sparked both immense enthusiasm and notable concerns. Stakeholders are keen to use this new technology for product enhancement, productivity, and competitiveness while grappling with ethical dilemmas, biases, data privacy, potential job losses, and evolving regulations. To address these challenges, companies need a comprehensive approach to AI responsibility, encompassing organizational actions like defining clear roles and responsibilities as well as technological strategies for model testing and monitoring.
How does a comprehensive, responsible approach to AI help leading companies accelerate and amplify the value they get from the technology?
Bain’s AA Maturity Assessment 2023 found that companies with a comprehensive, responsible approach to AI earn twice as much profit from their AI efforts. These leaders aren’t afraid of possible risks, and they aren’t tentative about what they pursue and deploy. Rather, they quickly implement use cases and adopt sophisticated applications, accelerating and amplifying the value they get from AI. Importantly, they also identify the uses of AI they will not pursue until the technology develops further or their organization is mature enough to manage those uses.
While generative AI is recent, machine learning and AI have a much longer history. The financial sector pioneered model risk management, implementing the US Office of the Comptroller of the Currency’s guidance on model validation (2000) and model risk management (2011). These policies promoted robust model development, validation, and monitoring. Additionally, throughout the 2010s, tech giants like Google advanced machine learning testing and operations, expanding the understanding of how to enhance the security, accuracy, and stability of such systems.
How can companies manage six specific system risks?
Beyond long-established risks such as bias, explainability, and malicious use, generative AI brings additional risks, including hallucinations, training data provenance, and ownership of output. Building on the experiences of the financial services and technology industries, organizations should make six commitments to managing AI system risks, spanning the most critical areas of risk across the organization and within each application. The six commitments are: security and reliability, transparency and explainability, fairness and safety, privacy and ownership, society and environment, and accountability and compliance.
How can a company enable responsible AI?
A comprehensive approach to responsible AI has three components.
1. Aspirations and commitments: To demonstrate to their stakeholders (customers, employees, shareholders, investors, regulators, and communities) that they will be responsible stewards, companies must clearly explain how they intend to manage the risks from these new technologies. This starts with acknowledging the new and heightened challenges: that they include not only technology questions but also equity and societal concerns, and that they require attention, disclosure, and communication.
Stakeholders expect companies to invest in secure, accurate, and unbiased systems. They expect these systems to be ethical and designed with potential future regulations and compliance requirements in mind. Of course, each organization will tune its commitments to its capabilities, potential exposures, and the specific requirements of its markets and location.
2. Governance processes, roles, and technology: Companies will need to augment existing approaches with new technology and practices that address the unique systems life cycle of AI solutions. For example, data governance and management practices will be required to cover new security, privacy, and ownership challenges. Roles, accountabilities, forums, and councils will all need to be revised and extended to effectively monitor these new systems and how they are used. This could include appointing a Chief AI Ethics Officer and/or an AI ethics council. After companies articulate their commitments, they need to ensure that the appropriate structures, policies, and technology are in place.
3. Culture: Given the broad impact, rapid advancement, and adoption of generative AI technologies, organization-wide training and engagement covering their use—as well as the organization’s aspirations and commitments—will be needed. By ensuring these efforts are iterative, a company can nurture a culture of vigilance and learning that continuously improves its ability to use AI responsibly. Many companies will find value in establishing or updating a clear code of conduct, either by adopting broad digital citizenship or data responsibility codes or through more specific codes of ethics for AI. These might include an AI acceptable use policy that outlines specific dos and don’ts or defines the risk assessments required for individual AI use cases.
Modern AI systems are dynamic and complex to govern through manual efforts alone. Effective AI technology platforms and application development frameworks are vital to enabling the rapid development and deployment of AI technology while embedding controls required to deliver on responsible AI commitments.
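One way such controls can be embedded is to treat the six commitments as an explicit checklist in the deployment process. The following is a minimal, hypothetical sketch in Python; the class, names, and workflow are illustrative assumptions, not a reference to any particular platform the article describes.

```python
# Illustrative sketch only: encodes the six responsible-AI commitment
# areas as a pre-deployment release gate. All identifiers here are
# hypothetical assumptions for the sake of the example.

from dataclasses import dataclass, field

COMMITMENTS = [
    "security and reliability",
    "transparency and explainability",
    "fairness and safety",
    "privacy and ownership",
    "society and environment",
    "accountability and compliance",
]

@dataclass
class ReleaseGate:
    """Tracks sign-off on each commitment for one AI application."""
    application: str
    signoffs: dict = field(default_factory=dict)  # commitment -> bool

    def record(self, commitment: str, passed: bool) -> None:
        # Reject anything outside the agreed commitment areas.
        if commitment not in COMMITMENTS:
            raise ValueError(f"Unknown commitment: {commitment}")
        self.signoffs[commitment] = passed

    def outstanding(self) -> list:
        """Commitments not yet reviewed or not yet passing."""
        return [c for c in COMMITMENTS if not self.signoffs.get(c, False)]

    def ready_to_deploy(self) -> bool:
        # The application ships only when every area has signed off.
        return not self.outstanding()

# Example: five of six reviews complete; the gate stays closed.
gate = ReleaseGate("customer-support-chatbot")
for c in COMMITMENTS[:-1]:
    gate.record(c, True)
print(gate.ready_to_deploy())  # prints False: compliance review pending
```

A gate like this makes the governance processes described above auditable: every deployment leaves a record of which commitment reviews were completed and when.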
Hallmarks of a responsible AI culture
For successful AI integration, it's crucial to embed responsibility into the company culture. This involves instilling responsible AI principles, ensuring leaders comprehend and manage risks, holding managers accountable for overseeing AI policies and governance, equipping team members with necessary resources and skills for responsible AI use, and actively engaging in dialogue with stakeholders about the risk-benefit balance.
This is a complicated terrain to navigate, but generative AI can’t be ignored. The scope of the technological and economic change it is likely to bring is just too great.