Risk management for AI ethics can be a daunting activity for many companies. As a leader, you might be asking yourself practical questions: How do I get started? What do I have to include initially, and what can I postpone until my company uses AI more broadly? The best approach is to start small but move quickly. Plan to expand your risk management function as you make additional strategic decisions on AI integration, and be prepared to revise your risk management program as your company’s use of AI evolves. Because AI is evolving so quickly, your program should be concise and flexible so you can respond rapidly to a changing landscape.
Risk management frameworks generally include multiple components. For AI ethics governance (another name for risk management), I propose that businesses include the following structural pillars in their program:
These pillars are the foundation of a governance program that can be followed by key stakeholders such as compliance, legal, product/service, sales, and operational areas of your organization.
Think of this as a road map. At a high level, what AI risks are you trying to manage? How, from a strategic perspective, do you want to manage them? Who will own the governance, and who will support it?
Document your basic answers to these questions. You can always modify them later.
Plan to perform an assessment of which AI risks you think need to be governed. You will want to rate each risk, for example as high, medium, or low. Perform this assessment initially and then periodically; some risks may need to be managed daily or monthly, while others only annually.
Identify key stakeholders for each risk. What is the primary area developing the initial use of AI for a specific task or activity? Who will perform the activity, and who is handling the testing and validation? Which organizational area will handle oversight: legal, compliance, or operations?
Assess the likely financial impact of each risk. You can use specific dollar ranges or categories like low, moderate, high.
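The assessment steps above can be sketched as a simple risk register. This is only an illustrative sketch: the risk names, ratings, stakeholders, review frequencies, and impact categories below are hypothetical examples, not prescriptions for your program.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    rating: str            # "high", "medium", or "low"
    review_frequency: str  # e.g. "daily", "monthly", "annually"
    stakeholders: list     # areas responsible for the risk
    financial_impact: str  # "low", "moderate", or "high"

# Illustrative entries only -- every value here is an assumption.
register = [
    AIRisk("Biased chatbot responses", "high", "daily",
           ["customer service", "compliance"], "high"),
    AIRisk("Model drift in sales forecasting", "medium", "monthly",
           ["data science", "operations"], "moderate"),
]

# Surface the highest-rated risks first for management review.
ordering = ["high", "medium", "low"]
for risk in sorted(register, key=lambda r: ordering.index(r.rating)):
    print(f"{risk.rating.upper():6} {risk.name} (review: {risk.review_frequency})")
```

Even a register this small makes the periodic review concrete: each entry carries its own rating, owners, and cadence, so the list can be re-sorted and re-rated as your use of AI evolves.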
Now that you have an overarching framework and a first round of risk assessment completed, you will want to put a general AI Use Policy statement in place. This does not need to be a multi-page statement covering all situations; it should be an easy-to-understand statement that all your stakeholders, employees, and customers can read and keep for future reference. How does your company want to position itself to the public about AI usage? This is in some ways similar to a code of conduct policy or a sustainability policy. What is your company’s commitment to the ethical use of AI?
Controls are processes put into place to “control,” or mitigate, the designated risk. A control might be a weekly review of testing and validation activities for an AI data model, or random manual sampling of emails generated by a chat tool such as ChatGPT used for customer service responses. Sampling helps ensure that responses meet your company’s quality standards and are consistent regardless of customer segment. Each control identifies what the activity is and what its specific purpose is.
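The random-sampling control described above can be sketched in a few lines. The 5% sample rate, function name, and response data are illustrative assumptions; in practice the rate would be tuned to the risk rating you assigned during assessment.

```python
import random

def sample_responses(responses, sample_rate=0.05, seed=None):
    """Randomly select a share of AI-generated responses for manual review.

    The sample_rate is a hypothetical default -- tune it to the risk
    rating assigned to this control in your assessment.
    """
    rng = random.Random(seed)
    k = max(1, round(len(responses) * sample_rate))  # always review at least one
    return rng.sample(responses, k)

# Hypothetical usage: pull 5% of a day's AI-generated replies for review.
days_responses = [f"response-{i}" for i in range(200)]
for item in sample_responses(days_responses, sample_rate=0.05, seed=42):
    print("flag for manual review:", item)
```

A fixed seed makes a given day's sample reproducible for auditors; omitting it gives a fresh random sample each run.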
Procedures should be written and updated as needed to document how each control is executed, what is done if the control is executed incorrectly, and how it will be monitored/tested for effectiveness.
There may also be another layer of “desktop” procedures detailing how specific tasks are performed and what the handoffs might be to other teams. These procedures are generally more granular than control procedures and provide the how-to steps for employees.
Identifying, documenting, and providing training is a key risk mitigation activity. If a company ensures that AI practices are understood and adhered to by applicable employees, it is better able to mitigate related risks.
For instance, new-hire training on procedures for using an AI chat tool can reduce rogue usage that might harm your customers.
Monitoring your risk program and specific activities ensures that your program aligns with the identified risk levels. If you have performance gaps in risk mitigation, monitoring will help identify these gaps, allowing you to develop plans to address them. Monitoring may also be required by customer contracts, outside governing bodies, or investors.
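One simple form of monitoring is checking that each control has actually run as often as its review cadence requires. The sketch below is a hypothetical illustration; the control names and intervals are invented examples.

```python
from datetime import date

def overdue_controls(controls, today):
    """Return the names of controls whose last execution is older than
    their maximum allowed interval.

    `controls` maps a control name to (last_run_date, max_days_between_runs).
    Both the names and the intervals used here are hypothetical.
    """
    return [name for name, (last_run, max_days) in controls.items()
            if (today - last_run).days > max_days]

# Illustrative data: a weekly control and a monthly control.
controls = {
    "chat-response sampling": (date(2024, 1, 1), 7),
    "model validation review": (date(2024, 1, 20), 30),
}
print(overdue_controls(controls, today=date(2024, 1, 25)))
# The weekly control is 24 days past its last run, so it is flagged.
```

A gap report like this gives you exactly what monitoring is for: a concrete list of where execution has fallen behind the identified risk levels, so remediation plans can be developed.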
Reporting is a way to be transparent. It can serve as a tool for management to determine where to adjust how risk is managed. It can document AI usage for external governing bodies. It can give the board of directors confidence that risks are well managed. Extracts of results can be shared with customers or communities that want proof of your transparency.
AI will be a part of almost every business going forward. To support the transparent and ethical use of AI, it is wise for your company to plan now how you will ensure your employees and third-party providers are using AI in a socially responsible, non-discriminatory, and sustainable manner. Your governance program can be basic to start and then be expanded and customized to fit your company’s risk level. Managing AI ethics is a crucial component of a responsible, ethical company’s overall strategy to manage risk and support thoughtful growth and protection of its employees, customers, and stakeholders.
My company uses artificial intelligence in a transparent and honest manner. We support the use of AI in business provided it meets use criteria that include being socially beneficial, explainable, fair, and secure. We integrate AI tools into our business processes to enhance efficiency and decision-making and to improve quality.