The nomenclature around AI impact assessments, and their relationship to other evaluations of AI, is unsettled. Some organizations use “AI risk assessment” and “AI impact assessment” interchangeably, while others distinguish between them; organizations that do differentiate the two disagree about how they relate to each other. The term “AI impact assessment” lacks a standard definition. However, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), which has gained traction in the global AI governance community, describes such assessments as tasks that “include assessing and evaluating requirements for AI system accountability, combating harmful bias, examining impacts of AI systems, product safety, liability, and security, among others.” Organizations also disagree about how AI impact assessments intersect with other evaluations, such as data protection impact assessments (DPIAs). AI impact assessments are part of AI governance programs: the policies and procedures that aim to ensure the responsible development and deployment of AI technologies.
Assessments can help organizations build trust in their products and services, counter threats to the organizations, and comply with relevant laws. At a time when policymakers, businesses, and the public are directing significant attention to AI, assessments can be a vehicle for developing trust among various stakeholders. AI impact assessments can also provide organizations with insight into how certain AI activities pose business risks, such as when employees input intellectual property (IP) into a third party’s AI tool. In addition to helping companies manage existing risk, they can serve as a compass for decisions about whether to proceed with developing or procuring AI products. AI impact assessments are, therefore, increasingly part of regulators’ expectations around how organizations take on and manage AI-related risk.
Below are the findings from E Com Security Solutions’ research into this area. The research identifies a set of emerging common steps in the AI impact assessment process and their implications at an organizational level.
INITIATING AN AI IMPACT ASSESSMENT: The circumstances that trigger an AI impact assessment will vary based on factors such as applicable law, product or service development, and an organization’s role within the AI ecosystem (e.g., developer vs. deployer). These incentives for conducting AI impact assessments may arise at different points in the AI development and deployment lifecycle. For example, an organization may perform an initial evaluation during the development phase and reassess after substantial changes to the model or system are made. Additional assessments may be necessary when the system is deployed in different contexts.
GATHERING MODEL-SYSTEM INFORMATION: To assess the risk that an AI system may pose, organizations must understand how that system works and how it was created. The list of common considerations below is not exhaustive. An organization that obtains a model or system from a third party may focus on the maturity of the third party’s AI governance framework and whether it has implemented controls to minimize certain risks. An organization may also direct less scrutiny towards the AI models and systems of more well-known third parties and those with whom the organization has a relationship. However, our research has observed that organizations tend to collect a baseline of relevant information in most situations. Examples of relevant information include:
- How the model was trained;
- The model or system’s use cases;
- Who the end users are and where the system will be deployed; and
- Potential use cases for the system, organized by category (e.g., intended and unintended uses).
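The baseline information above can be captured in a structured intake record. The sketch below is a hypothetical illustration; the class name, field names, and example values are assumptions for demonstration, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical intake record capturing the baseline information an
# organization might collect about an AI model or system. Field names
# are illustrative assumptions, not an established schema.
@dataclass
class ModelIntakeRecord:
    name: str
    training_data_summary: str                      # how the model was trained
    use_cases: dict = field(default_factory=dict)   # category -> list of uses
    end_users: list = field(default_factory=list)
    deployment_contexts: list = field(default_factory=list)

    def categorized_uses(self, category: str) -> list:
        """Return the use cases filed under a category (e.g., 'intended')."""
        return self.use_cases.get(category, [])

# Example: a record for a hypothetical customer-support chatbot.
record = ModelIntakeRecord(
    name="support-chatbot",
    training_data_summary="Fine-tuned on anonymized support transcripts",
    use_cases={
        "intended": ["answer customer billing questions"],
        "unintended": ["provide legal advice"],
    },
    end_users=["customer support agents"],
    deployment_contexts=["web portal (EU and US)"],
)
print(record.categorized_uses("unintended"))  # -> ['provide legal advice']
```

Keeping intended and unintended uses in separate categories, as the list above suggests, makes it straightforward to review each category during the risk-assessment step.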
ASSESSING RISKS AND BENEFITS: Reviewers take the information about the model or system into account to determine the AI model or system’s risks and benefits for a particular use case. The type and level of risk present will inform what risk management strategies an organization implements to reduce the risk to within an acceptable threshold. Information about benefits will also educate an organization on how to proceed with an AI project. As part of this step, organizations may consider:
- The potential risks of harm and benefits associated with a particular use case;
- How the generated list of system risks compares with the company’s taxonomy of risks (divided into categories like high, medium, and low); and
- Whether the assessment is being conducted during the development or post-development phase (including fine-tuning during deployment), and the existence and context of risks identified in system testing.
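The comparison against a company risk taxonomy can be sketched mechanically. The example below is a hypothetical illustration, assuming a numeric risk score per identified risk and illustrative high/medium/low thresholds; real taxonomies and scoring methods vary by organization.

```python
# Hypothetical company risk taxonomy: each tier maps to the minimum
# numeric score that places a risk in that tier. Scores and thresholds
# are illustrative assumptions.
TAXONOMY = {"high": 7, "medium": 4, "low": 0}

def classify_risk(score: int) -> str:
    """Map a numeric risk score onto the first taxonomy tier it meets."""
    for tier, minimum in TAXONOMY.items():
        if score >= minimum:
            return tier
    return "low"

# Illustrative scores for risks identified for a particular use case.
identified_risks = {"hallucination": 8, "IP leakage": 5, "latency": 2}
classified = {risk: classify_risk(score) for risk, score in identified_risks.items()}
print(classified)
# -> {'hallucination': 'high', 'IP leakage': 'medium', 'latency': 'low'}
```

Classifying each risk into a tier gives reviewers a consistent basis for choosing which risk management strategies to apply in the next step.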
IDENTIFYING AND TESTING RISK MANAGEMENT STRATEGIES: Organizations select risk management strategies based on their responsiveness to a specific, identified risk.
For example, if an organization identifies hallucinations as a risk, it tailors its response to this risk, such as tweaking the AI’s implementation to reduce hallucinations and ensuring ongoing monitoring of hallucination outputs. Once an organization has identified risk management strategies, it can test their efficacy and balance the residual risk against the benefits to determine appropriate future steps. The organization may then record and operationalize the final decision, such as advancing the AI project to the next stage in its lifecycle.
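The decision logic described above — test the mitigation, compare residual risk against an acceptable threshold, then weigh it against the benefit — can be sketched as follows. This is a simplified illustration; the function name, the numeric measurements, and the idea of reducing the decision to scalar comparisons are all assumptions for demonstration.

```python
# Hypothetical sketch of the final decision step: advance the AI project
# only if a tested mitigation brings residual risk within an acceptable
# threshold and the remaining risk is outweighed by the benefit.
def decide_next_step(pre_mitigation: float, post_mitigation: float,
                     threshold: float, benefit: float) -> str:
    """Return a recorded decision based on mitigation test results."""
    if post_mitigation >= pre_mitigation:
        return "mitigation ineffective: revisit strategy"
    if post_mitigation > threshold:
        return "residual risk too high: do not advance"
    if benefit <= post_mitigation:
        return "risk outweighs benefit: do not advance"
    return "record decision and advance to next lifecycle stage"

# Example: hallucination rate measured before and after a mitigation,
# such as grounding responses in retrieved documents (illustrative numbers).
decision = decide_next_step(pre_mitigation=0.30, post_mitigation=0.05,
                            threshold=0.10, benefit=0.8)
print(decision)  # -> record decision and advance to next lifecycle stage
```

Recording the decision alongside the measurements that produced it supports the operationalization step the text describes, since later reviewers can see why the project was allowed to advance.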
E COM SECURITY SOLUTIONS – INCIDENT RESPONSE AND CYBER CRISIS MANAGEMENT
The E Com Security Solutions Cyber Range creates immersive simulations that guide your team through realistic breach scenarios, helping ensure you can respond to and recover from enterprise-level cyber security incidents, manage vulnerabilities, and build a stronger security culture in your organization. These virtual experiences are available anywhere in the world to strengthen your organization’s cyber response, improve resilience, and fix vulnerabilities.
Increase preparedness with E Com Security Solutions’ assess, build, and test capabilities, along with processes, plans, and playbooks that minimize the impact of cybersecurity incidents. Receive emergency incident response support from E Com Security Solutions, including forensic analysis, incident command, and deep/dark web analysis.