Stakeholder Engagement and Assessment Service – Engages and organizes the AI Governance steering committee, develops the committee charter, and defines roles and responsibilities. Assesses current and target compliance and risk requirements, engaging key stakeholders in the process. Develops a comprehensive AI Governance Roadmap reflecting the steps necessary to reach the targeted maturity level; ideally, your AI Governance roadmap will mirror AI application development steps. Develops Impact Assessments to meet regulatory requirements.
AI Governance Policies Services – Develops an AI Governance policy detailing how AI should be used, the safeguards employed, and how compliance with regulatory requirements is ensured. Updates Privacy Policies, Records Retention/Data Retention Policies, and Data Security Classification Policies as necessary.
AI Governance Process Development Services – Develops a series of governance processes, including a Regulatory Review Process, Data Provenance Process, Privacy and Sensitive Information Review Process, Ethical Use Review Process, and AI Accuracy, Correctness and Safety Review Process.
AI Governance Process Execution, Monitoring and Remediation Services – Assists in kicking off AI governance processes during development and running them on a scheduled, regular, ongoing basis after launch. Establishes a review cycle that drives any needed updates or remediation.
Pressure on companies to use AI to gain a competitive advantage (or at least not fall behind competitors) is steadily rising, and in 2024, CEOs will push their Legal,...
The advent of generative artificial intelligence (AI) offers the promise of tremendous leaps in productivity, new revenue, cost savings, and increased innovation. After decades of technological stagnation with respect to AI, generative AI has moved the field from the fringes to the mainstream. AI deployment will initially be piecemeal, with AI-assisted applications launched in legal, finance, marketing, product design, engineering, and eventually nearly every area of the organization. It is no overstatement to say that AI has the potential to be transformative.
Yet AI’s tremendous benefits are being met with almost equal concern about its risks. Regulators have been rushing to enact laws restricting how it can be used. Furthermore, the courts are only beginning to evaluate AI’s copyright and intellectual property impacts. Finally, AI also raises significant questions about its ethical use, as well as its correctness and safety.
Contoural’s approach to AI Governance focuses on the following areas:
Generative AI’s explosive adoption has been met with a quick response from regulators. Every week, governments across the world are proposing restrictions on how and where this new technology can be used. Seeking to set the global standard, European regulators announced restrictions on how AI can use information about individuals, along with overall safeguards, especially around the use of personal information. In the U.S., states are limiting how companies can use AI to make financial decisions such as loan approvals, and the Biden administration created a new standard for safety and security to protect privacy and civil rights. It can be argued that regulators are competing with each other, rushing to develop regulations in hopes of setting a global regulatory standard. These new regulations are just the beginning; we expect to see many countries and states developing new rules limiting AI this year.
In addition to new AI regulations, there are significant copyright and intellectual property concerns around AI. There are concerns that some “closed” AI Large Language Models from commercial vendors have been trained with copyrighted data. Furthermore, as these are closed systems, it is not possible to inspect what training data was used. Other generative AI vendors such as Adobe have gone out of their way to ensure that their products are based exclusively on fully licensed training data, even going as far as offering indemnification for copyright infringement claims for the users of their products.
In addition to the risk of misusing others’ IP, AI users also need to be careful about compromising their own IP or other sensitive data. For example, The Economist reported last year that Samsung employees unintentionally leaked proprietary source code via ChatGPT. An unprotected disclosure of a trade secret to a third party – through an AI-assisted application – vitiates the status of the information as a trade secret.
AI systems are susceptible to biases in the “training data” used to build the system’s intelligence, which can lead to unethical outcomes. For example, if an HR application “teaches” an AI system to screen job candidates based on historical hiring profiles that do not reflect a company’s diversity goals, it may embed an unintended bias. In this example, if the system is fed predominantly white males as examples of “ideal” employees, the AI system may inadvertently recommend only white male candidates. In addition to bias, in the rush to deploy AI, companies need to ensure they are following their other established ethical practices, including transparency and accountability.
Finally, “naïve” AI systems want to please and can sometimes fabricate false information. Generative AI constructs information. There have been recent cases in which AI systems “constructed” fake legal cases that were then submitted to the court. Additionally, AI systems can provide incorrect or unsafe advice. Many accuracy, correctness, and safety issues are examples of poor AI governance rather than any inherent failure of AI.
Despite these risks and concerns, IT and legal departments will face tremendous pressure to deploy AI applications. Organizations that sit on the sidelines may lose their competitive advantage.
The choice between launching an AI-assisted application despite these risks and not using AI at all is a false dilemma. Today, companies are successfully using AI compliantly, ethically, and correctly, while limiting legal risk, through an AI Governance program. Good AI Governance not only drives effective program development; it also saves time, allowing AI applications to move into production more quickly.
While it may be tempting to try to develop a program with a small group of stakeholders, this approach may slow down or even halt program development. As a first step, needs should be assessed and socialized with a larger group of stakeholders early in the process. Tasks include:
Assessment and roadmap development – Assess current and targeted compliance and risk requirements, engaging key stakeholders in the process. Determining up front what needs to be done, and at what level, can speed up these types of complex projects.
Impact assessment creation – Some jurisdictions may specifically require an AI impact assessment focusing on affected people and organizations.
Organizing an AI Governance steering committee – AI governance is complex, requiring the participation of various stakeholders, including compliance, risk, legal, privacy, information governance, data governance, IT, and business functions. This committee should be formed early in the process to ensure both that all risks and requirements are covered and that each group feels a sense of “buy in” to the process.
Roles and responsibilities – Many AI governance functions will require effort from multiple groups. Establishing ongoing roles and responsibilities early ensures that AI governance becomes a continual process and not a one-time exercise.
Successful AI governance requires the creation of a new policy and possibly updates to existing policies.
Create AI Governance Policy – The next step is developing and updating AI Governance policies. An AI Governance Policy sets out the compliant, transparent, and ethical use of AI for the organization. It details how AI should be used, the safeguards employed, and how compliance with regulatory requirements is ensured. This is your overall “guiding light,” demonstrating to others that you are using AI responsibly.
Update data retention/records retention policy and schedule – Organizations may need to update their data retention policies or records retention schedules to prevent older legacy data containing sensitive or incorrect information from “polluting” the development of AI systems.
Data security classification policies – A data security classification policy classifies information based on privacy, confidentiality, intellectual property, and other sensitivity factors. It may need to be updated to ensure that appropriate controls are placed on sensitive information; a minimal enforcement sketch follows this list.
Privacy policies – Many AI regulatory restrictions center on the use of personal information. Privacy policies need to be synchronized with AI governance policies.
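To illustrate how a data security classification policy might be enforced in practice, consider the minimal Python sketch below. The classification levels and the is_safe_for_ai rule are hypothetical assumptions for illustration, not a prescribed standard; real policies will define their own tiers and controls.

    from enum import Enum

    class Classification(Enum):
        """Hypothetical data security classification levels."""
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3   # e.g., trade secrets, corporate confidential
        RESTRICTED = 4     # e.g., personal information, regulated data

    # Hypothetical policy rule: only PUBLIC and INTERNAL data may be sent
    # to an external generative AI service without additional review.
    MAX_AI_CLASSIFICATION = Classification.INTERNAL

    def is_safe_for_ai(level: Classification) -> bool:
        """Return True if data at this level may be used with an external AI tool."""
        return level.value <= MAX_AI_CLASSIFICATION.value

    for level in Classification:
        print(f"{level.name}: {'allowed' if is_safe_for_ai(level) else 'requires review'}")

A gate like this, however it is implemented, turns the written classification policy into a check that AI-assisted applications can apply before data leaves the organization.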
Having up-to-date policies provides defensibility if an AI system faces review from a regulator. These policies demonstrate that the organization is mindful in its use of AI and diligent in its compliance efforts.
AI requires the development of governance processes. These processes will come into play both during development and during ongoing deployment.
Regulatory review process – AI regulatory requirements are being announced seemingly every week. Organizations need to develop a process for monitoring regulatory changes to ensure their systems comply with any new rules.
Data provenance process – AI systems leverage both “training data” used by large language models and supplementary information used as part of retrieval augmented generation. Companies initially need to undertake reasonable due diligence to ensure this input data is not copyrighted or, if it is, that they have the right to use it. Furthermore, as input data is often refreshed, provenance must be ascertained periodically; a screening sketch covering this and the following review appears after this list.
Privacy and sensitive information review process – In addition to ensuring that input data is not copyrighted, organizations should develop a process to ensure the AI’s input data does not contain personal information or other types of sensitive information such as trade secrets or corporate confidential information.
Ethical use review process – In addition to compliance, AI systems need to produce ethical results. For example, a visual generative AI application, when asked to create a picture of “senior executives,” should not consistently produce images consisting exclusively of older white males. The AI output should be evaluated to ensure it is being used ethically and reflects the organization’s values.
AI accuracy, correctness, and safety review process – In addition to compliance and legal assuredness, AI needs to be accurate, correct, and safe. AI’s polished output can lull a user into believing that all the information it produces is correct and accurate. Correctness and accuracy need to be tested both throughout development and on an ongoing basis; a simple review sketch appears after this list. Additionally, AI needs to be tested for safety to ensure it is not being misused.
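To make the provenance and sensitive-information reviews concrete, here is a minimal, hedged Python sketch of a pre-ingestion screen for retrieval-augmented generation documents. The metadata fields, the APPROVED_LICENSES list, and the regex patterns are illustrative assumptions only; a real review would rely on vetted detection tooling and legal input.

    import re
    from dataclasses import dataclass

    # Hypothetical set of licenses the organization has cleared for AI use.
    APPROVED_LICENSES = {"public-domain", "cc-by-4.0", "internally-authored"}

    # Illustrative (not exhaustive) patterns for personal information.
    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    @dataclass
    class SourceDocument:
        doc_id: str
        text: str
        license: str   # provenance metadata recorded at acquisition time
        source: str

    def screen_document(doc: SourceDocument) -> list[str]:
        """Return review flags; an empty list means the document passes."""
        flags = []
        if doc.license not in APPROVED_LICENSES:
            flags.append(f"unverified license '{doc.license}' from {doc.source}")
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(doc.text):
                flags.append(f"possible {label} detected")
        return flags

    doc = SourceDocument("d1", "Contact jane@example.com for details.",
                         "unknown", "vendor-feed")
    print(screen_document(doc))  # flags both the license and the email address

Because input data is refreshed over time, a screen like this should run on each ingestion cycle, not just once, with flagged documents routed to human review.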
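The accuracy and correctness review can likewise be operationalized as a recurring test against a curated “golden” question set. The sketch below assumes a hypothetical ask_model function standing in for whatever AI application is under review, and its test cases are illustrative; it outlines the technique rather than a complete test harness.

    # Minimal accuracy-review sketch: compare model answers against
    # expert-reviewed reference answers and report a pass rate.

    GOLDEN_SET = [  # curated by subject-matter experts; contents illustrative
        {"question": "What year was the GDPR adopted?", "expected": "2016"},
        {"question": "Is our AI policy public?", "expected": "yes"},
    ]

    def ask_model(question: str) -> str:
        """Placeholder for the AI application under review (assumption)."""
        raise NotImplementedError("wire this to the system being tested")

    def run_accuracy_review() -> float:
        passed = 0
        for case in GOLDEN_SET:
            answer = ask_model(case["question"])
            if case["expected"].lower() in answer.lower():  # simple containment check
                passed += 1
        return passed / len(GOLDEN_SET)

    # Run before each release and on a schedule thereafter,
    # and retain the results as evidence of ongoing review.

Safety testing can follow the same pattern, substituting a set of misuse prompts and checking that the system refuses or escalates appropriately.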
All AI governance processes should be completed regularly, and the results should be retained.
Once launched, AI systems need to be monitored. Any issues, discrepancies, or problems should be noted, along with steps taken to remediate these issues.
Initial and ongoing process execution – Ensure all processes are enacted during and after launch.
Ongoing monitoring – Monitor and review the results of the processes.
Updates and remediation – Update the system and/or approach as compliance and other rules change. Remediate any issues encountered and document the actions taken, as illustrated in the log sketch below.
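One lightweight way to make monitoring and remediation auditable is to keep a structured log of issues and the actions taken. The fields in this sketch are assumptions about what a reviewer might ask for, and the system name and issue are hypothetical; the format is illustrative, not mandated.

    import json
    from datetime import datetime

    # Illustrative remediation log entry; append one record per issue found.
    issue_record = {
        "logged_at": datetime.now().isoformat(),
        "system": "contract-review-assistant",   # hypothetical application name
        "issue": "model cited a nonexistent statute",
        "process": "accuracy, correctness, and safety review",
        "remediation": "added citation-verification step; reran golden set",
        "status": "closed",
    }

    with open("ai_governance_issue_log.jsonl", "a") as log:
        log.write(json.dumps(issue_record) + "\n")

Retaining these records turns monitoring from an informal habit into evidence that issues were caught and addressed.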
In the event of a regulatory inquiry, being able to readily communicate what you intended to do (policies), how you intended to ensure you were doing it (processes), and how you addressed issues when they arose will demonstrate compliance and make the system more defensible.
A strong AI governance program developed early speeds up overall deployment. By identifying compliance requirements and anticipating risks during system development, organizations can avoid having to redesign or rework the system at the tail end. Likewise, good governance engages key stakeholders early on, allowing them both to raise concerns and to become comfortable with chosen approaches. While a bit counterintuitive, creating AI Governance drives faster development times.
Once a system is launched, organizations must also consider how to minimize the impact of new regulatory requirements or other challenges. In other words, how can organizations create an AI Governance function today that does not have to be updated every time a new rule is announced? In reality, many of the global requirements address similar and common areas. Designing an AI governance function around these common requirements will enable companies to become “AI Agile,” requiring minimal changes.
This new, complex technology faces a chaotic legal and regulatory environment. However, good AI Governance follows established compliance and risk-reduction strategies, even if applied in this new area. Fear not AI. Instead, with a smart approach, embrace the technology and profit from it.
Contoural is the largest independent provider of strategic Information Governance, Privacy, and AI Governance consulting services, including records and information management, litigation readiness and control of privacy and sensitive information.
Copyright 2024. All Rights Reserved