Pillars of AI Ethics Risk Management - Get Started on Managing this New Risk

June 19, 2024

Risk management for AI ethics can be a daunting activity for many companies. As a leader, you might be asking yourself practical questions: How do I get started? What do I have to include initially, and what can I postpone until my company uses AI more broadly? The best approach is to start small but move quickly. Plan to expand your risk management function as you make additional strategic decisions on AI integration, and be prepared to revise your risk management program as your company's use of AI evolves. Because AI is evolving so quickly, your program should be concise and flexible so you can respond rapidly to a changing landscape.


Risk management frameworks generally include multiple components. For AI ethics governance (another name for risk management), I propose that businesses include the following structural pillars in their program:


  • Governance Framework
  • Risk Assessment
  • AI Use Policy
  • Controls
  • Control Procedures
  • Training
  • Monitoring
  • Reporting


These pillars are the foundation of a governance program that can be followed by key stakeholders such as compliance, legal, product/service, sales, and operational areas of your organization.


Governance Framework


Think of this as a road map. At a high level, what AI risks are you trying to manage? How, from a strategic perspective, do you want to do this? Who will own the governance, and who will support it?

Document your basic answers to these questions. You can always modify them later.


Risk Assessment


Plan to perform an assessment of which AI risks you think need to be governed. You will want to rate each risk – for example, as high, medium, or low. Perform this assessment initially and then periodically. Some risks may need to be managed daily or monthly, and others only annually.

Identify key stakeholders for each risk. What is the primary area developing the initial use of AI for a specific task or activity? Who will perform the activity, and who is handling the testing and validation? What organizational area will handle oversight – legal, compliance, or operations?

Assess the likely financial impact of each risk. You can use specific dollar ranges or categories like low, moderate, and high.
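The assessment steps above can be captured in a simple risk register. The following is a minimal sketch, with illustrative risk names, ratings, and owners that are assumptions rather than recommendations:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    severity: str          # "high", "medium", or "low"
    financial_impact: str  # "low", "moderate", or "high"
    owner: str             # stakeholder area handling oversight
    review_frequency: str  # "daily", "monthly", or "annually"

# Illustrative register entries only; your own risks and owners will differ
register = [
    AIRisk("Biased model outputs", "high", "high", "compliance", "monthly"),
    AIRisk("Customer data leakage via prompts", "high", "moderate", "legal", "daily"),
    AIRisk("Outdated training documentation", "low", "low", "operations", "annually"),
]

# Surface the risks that need the most frequent attention
due_daily = [r.name for r in register if r.review_frequency == "daily"]
print(due_daily)
```

Even a register this simple makes the periodic review cadence explicit, so no risk silently goes unreviewed.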


AI Use Policy


Now that you have an overarching framework and a first round of risk assessment completed, you will want to get a general AI Use Policy statement in place. This does not need to be a multiple-page statement covering all situations; it should be an easy-to-understand statement that all your stakeholders, employees, and customers can read and keep for future reference. How does your company want to position itself to the public about AI usage? This is in some ways similar to a code of conduct policy or a sustainability policy. What is your company's commitment to the ethical use of AI?


Controls


Controls are processes put into place to "control" or mitigate the designated risk. A control might be a weekly review of testing and validation activities for an AI data model, or a random manual sampling of emails generated by a chat tool, such as ChatGPT, used for customer service responses. Sampling helps ensure that the responses meet your company's quality standards and are consistent regardless of the customer segment. Each control identifies what the activity is and what its specific purpose is.
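The random sampling control described above can be sketched in a few lines. This is a minimal illustration, assuming the responses are already collected in a list; the sample size and any seed are choices your own control procedure would specify:

```python
import random

def sample_for_review(responses, sample_size=5, seed=None):
    """Draw a random sample of AI-generated responses for manual QA review."""
    rng = random.Random(seed)
    k = min(sample_size, len(responses))
    return rng.sample(responses, k)

# Illustrative weekly batch of chat-tool replies (placeholder strings)
weekly_responses = [f"response-{i}" for i in range(100)]
review_batch = sample_for_review(weekly_responses, sample_size=5, seed=42)
print(review_batch)
```

A fixed seed makes the sample reproducible for audit purposes; omitting it gives a fresh random draw each week.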


Control Procedures


Procedures should be written and updated as needed to document how each control is executed, what is done if the control is executed incorrectly, and how it will be monitored/tested for effectiveness.


There may also be another layer of "desktop" procedures – detailing how specific tasks are performed and what the handoffs might be to other teams. These procedures are generally more granular than control procedures and provide the how-to steps for employees.


Training


Identifying, documenting, and providing training is a key risk mitigation activity. If a company ensures AI practices are understood and adhered to by applicable employees, the company is better able to mitigate related risks.

For instance, new-hire training on procedures for using an AI chat tool can reduce rogue usage that might harm your customers.


Monitoring


Monitoring your risk program and specific activities ensures that your program aligns with the identified risk levels. If you have performance gaps in risk mitigation, monitoring will help identify these gaps, allowing you to develop plans to address them. Monitoring may also be required by customer contracts, outside governing bodies, or investors.


Reporting


Reporting is a way to be transparent. It can serve as a tool for management to determine where to adjust how risk is managed. It can document AI usage for external governing bodies. It can provide confidence to the board of directors that risks are well managed. Extracts of results can be shared with customers or communities that want proof of your transparency.


Conclusion


AI is a part of almost every business going forward. To support transparent and ethical use of AI, it is wise for your company to plan now how you will ensure your employees and third-party providers are using AI in a socially responsible, non-discriminatory, and sustainable manner. Your governance program can be basic to start and then be expanded and customized to fit your company's risk level. Managing AI ethics is a crucial component of a responsible, ethical company's overall strategy to manage risk and support thoughtful growth and the protection of its employees, customers, and stakeholders.
