A strong answer describes key observations from both charts, offers hypotheses (e.g., why certain functions have a higher proportion of high-risk systems), and discusses the implications of these observations (e.g., that the high proportion of unclassified or unclear cases calls for clearer regulation). The following is an example of such observations.
Unacceptable Risk Level: This top tier encompasses uses that pose an unacceptable risk to people's safety, livelihoods, and rights. These use cases are prohibited unless specifically authorized by law for national security purposes. Examples include social-scoring AI systems, harmful behavioral manipulation, and mass surveillance.
High-Risk Level: Use cases in this category require a conformity assessment before they can be placed on the market. The assessment covers data quality (to minimize risks and discriminatory outcomes), documentation and traceability, transparency and the provision of information to users, human oversight, and robustness, accuracy, and cybersecurity. The EU has defined a list of high-risk AI uses, such as access to employment, education, and public services, management of critical infrastructure, safety components of vehicles, law enforcement, and the administration of justice.
Limited Risk Level: Uses in this category are subject only to transparency obligations. For instance, AI-based chatbots must inform users that they are interacting with a machine.
Minimal Risk Level: Use cases with minimal risk carry no obligations, although adopting voluntary codes of conduct is recommended; doing so can build trust in AI and give service providers a competitive advantage.
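To make the tiered structure concrete, the mapping from risk level to obligations can be sketched as a small lookup. This is a minimal illustration of the four levels described above, not an implementation of the regulation; the enum name `RiskLevel`, the helper `obligations_for`, and the obligation strings are paraphrases introduced here for illustration only.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Obligations per tier, paraphrased from the four levels described above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: [
        "prohibited unless specifically authorized by law (national security)",
    ],
    RiskLevel.HIGH: [
        "conformity assessment before market placement",
        "data quality, documentation, and traceability",
        "transparency and provision of information to users",
        "human oversight",
        "robustness, accuracy, and cybersecurity",
    ],
    RiskLevel.LIMITED: [
        "transparency obligations (e.g., a chatbot must disclose it is a machine)",
    ],
    RiskLevel.MINIMAL: [
        "no mandatory obligations; voluntary codes of conduct recommended",
    ],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the obligations associated with a given risk tier."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value}: " + "; ".join(obligations_for(level)))
```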
Unsurprisingly, more than 75% of AI systems in human resources are classified as high-risk, with more than 25% classified as high-risk in each of customer service, accounting and finance, and IT and security. Unclear classifications, on the other hand, appear in every enterprise function, most prominently in accounting and finance at over 70%.
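The per-function shares read off the charts can be reproduced from any inventory of classified use cases with a simple tally. The sketch below uses made-up records purely to illustrate the computation; the function names, the rows in `records`, and the helper `shares_by_function` are hypothetical, not the underlying survey data.

```python
from collections import Counter, defaultdict

# Hypothetical (function, risk class) records. The real charts are built
# from an actual inventory of AI use cases; these rows are placeholders.
records = [
    ("human resources", "high"),
    ("human resources", "high"),
    ("human resources", "limited"),
    ("accounting and finance", "unclear"),
    ("accounting and finance", "high"),
    ("customer service", "high"),
    ("customer service", "minimal"),
]

def shares_by_function(rows):
    """Return, per enterprise function, the share of each risk class."""
    by_function = defaultdict(Counter)
    for function, risk in rows:
        by_function[function][risk] += 1
    return {
        function: {risk: n / sum(counts.values()) for risk, n in counts.items()}
        for function, counts in by_function.items()
    }

for function, shares in shares_by_function(records).items():
    print(function, {risk: f"{share:.0%}" for risk, share in shares.items()})
```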
Possible reasons for the high-risk share
1. Sensitive Data Handling: Human resources handles a wide array of sensitive data, including personal information, employee performance evaluations, and health records. Using AI in HR processes can amplify data-privacy concerns and the potential for harm if such data is not handled with the utmost care. High-risk AI systems in HR often involve automated hiring decisions or performance evaluations, which can lead to discriminatory outcomes and infringe on individuals' rights.
2. Financial Impact and Decision-Making: AI's application in accounting and finance often involves tasks such as financial forecasting, investment analysis, and risk assessment. Inaccurate AI predictions or undetected anomalies can have severe financial repercussions for businesses, investors, and individuals, so the potential risks involved in these critical financial decisions warrant a high-risk classification.
3. Vulnerability to Cyberattacks: The finance sector handles large volumes of transactions and sensitive data, making AI systems in this domain attractive targets for cybercriminals seeking to exploit vulnerabilities. Malicious manipulation of AI models in financial processes can lead to significant financial losses, which makes robustness and cybersecurity crucial considerations in risk assessment.
The risk class of an AI system profoundly affects the likelihood that it will be developed, adopted, or funded. High-risk systems have a lower chance of being implemented because the additional compliance requirements increase cost and complexity, raising the barrier to adoption.
Most high-risk systems are expected to be in human resources, customer service, accounting and finance, and legal, so fewer companies are likely to benefit from AI in these areas. This pattern has implications for resource allocation and strategic planning when adopting AI technologies.
Moreover, unclear risk classifications slow down investment and innovation. Examining the causes of this uncertainty can yield concrete recommendations for policymakers and companies to promote responsible AI innovation. The main causes of unclear risk classifications could include:
1. Ambiguity in regulatory guidelines: Unclear or evolving regulatory frameworks may lead to varying interpretations and evaluations of AI systems' risk levels.
2. Lack of standardized evaluation criteria: The absence of universally accepted criteria for assessing AI risks can produce inconsistent assessments.
3. Complexity of AI algorithms: Advanced AI algorithms and deep learning models can be difficult to interpret, which makes their decision-making processes, and thus their risks, hard to evaluate.
4. Limited historical data: Some AI systems break new ground, and the absence of extensive historical data can hinder accurate risk assessment.
5. Ethical considerations: Ethical dilemmas arising from AI applications, such as bias and fairness issues, may create uncertainty in assigning risk levels.
Addressing these underlying causes of uncertainty could pave the way for more robust and streamlined AI regulation and adoption, facilitating responsible and beneficial AI deployment across industries.