Section 53 of the EU Artificial Intelligence Act (2024)
Today's post is going to get into some of the technical parts of the EU Artificial Intelligence Act 2024 (I will refer to it as The Act), in particular Section 53, which carves out a series of approved uses even in areas of high risk. If you are in the EU, or want your organisation's AI use to be compatible in case your country lines up with the EU standard, then this post might be useful to you.
My thanks to Ivett Bene who shared The Act with me after my post about Australian guidance for AI Implementations.
But hold on! Why is little old Stu down in sunny Australia writing a post about European AI legislation? Good question.
The EU Act quite succinctly shines a spotlight on a trend I have seen across guidance from multiple government bodies and regulators: decision making and how AI relates to it. For example, the Policy for the Responsible Use of AI in Government emphasises that AI has the potential to improve data-driven decisions, enhancing productivity and policy outcomes. However, it and other guidance documents all point out the inherent danger of AI unduly influencing decision making through biased data, incomplete summaries, and in some cases even intentional fraud. They rightly push for human-centred AI approaches, which is a very good thing. We want to avoid negative outcomes from the use of AI; however, these protections need to be based in evidence and reason, not fear, and the EU Act seems to go a bit further, suggesting almost any AI system that influences decision making is automatically high risk (read on below).
Labelling all decision-focused AI as high risk has big implications for Knowledge Management, which has spent 30 years looking at AI and other technologies to try and bring the right knowledge to the right person at the right time so they can make better decisions.
It's almost our whole game when you think about it like that.
So let's take a few moments to look at the Act, understand what is going on, and, at least for our European friends, look at the four carve-outs they might be able to take advantage of for better KM.
The EU Artificial Intelligence Act
First drafted in 2021, the Act entered into force on 1 August 2024 and has been in the news recently, with some industry pundits concerned it is legislating technology that is already out of date and has definitional challenges. EuroNews and KPMG both say it will create governance burdens that will lead to significant inhibition of innovation as well as legal power struggles. That said, it also had some wins, which we should all benefit from:
- Added exemptions in Article 2 for doing scientific research with AI, for the entire development process, and for the open-source sector;
- Transformed the rigid high-risk obligations via Article 8 into flexible principles that take the context of the deployment of an AI system into account and only require what is technically feasible, while the concrete obligations in Articles 9 - 15 have also been heavily improved;
- Included an obligation in Article 28 that will accelerate information sharing along the AI value chain between different market actors, in order to enable downstream providers and deployers to become compliant with the AI Act.
The Act includes a huge number of negative use cases and in Article 6 defines a number of high-risk areas where AI systems can do significant harm to citizens' health, safety or fundamental rights. An AI system meeting both of the following conditions is considered high-risk:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
Furthermore, according to these rules, Annex III covers a set of pre-defined high-risk areas.
ANNEX III: High-risk AI systems referred to in Article 6(2)
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas.
1. Biometrics
(a) Remote biometric identification systems.
This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;
(b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;
(c) AI systems intended to be used for emotion recognition.
2. Critical Infrastructure
AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity.
3. Education and Vocational Training
(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;
(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
4. Employment, Workers’ Management, and Access to Self-employment
(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
(b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.
5. Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
(c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;
(d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters, and medical aid, as well as emergency healthcare patient triage systems.
6. Law Enforcement
(a) AI systems intended to be used by or on behalf of law enforcement authorities to assess the risk of a natural person becoming the victim of criminal offences;
(b) AI systems intended to be used by law enforcement authorities as polygraphs or similar tools;
(c) AI systems intended to be used to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d) AI systems intended to be used to assess the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons;
(e) AI systems intended to be used for the profiling of natural persons in the detection, investigation, or prosecution of criminal offences.
7. Migration, Asylum, and Border Control Management
(a) AI systems intended to be used by public authorities for polygraphs or similar tools;
(b) AI systems intended to assess a risk, including a security risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
(c) AI systems intended to assist competent public authorities in the examination of applications for asylum, visa, or residence permits;
(d) AI systems intended for detecting, recognising, or identifying natural persons in migration, asylum, or border control management.
8. Administration of Justice and Democratic Processes
(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law;
(b) AI systems intended to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda.
So what is Section 53 about?
Now we know that a large percentage of everyday use cases are considered high risk by The Act. After that long list of negative use cases, Section 53 outlines some exceptions where AI can still be considered, even in high-risk areas. The full text of Section 53 is below, and I will try to summarise it as best I can (I am not a lawyer, and I advise you to seek legal advice if you are planning to implement AI in one of the high-risk areas). There are four conditions under which you might be allowed to operate AI solutions in these high-risk areas.
What are the four conditions?
- The AI system performs a very narrow or single procedural task, e.g. classifying documents (see the sketch after this list)
- The AI system improves the result of a previously completed human activity
- The AI system tests or checks decision-making patterns of humans, e.g. for quality control
- The AI system performs preparatory work, e.g. indexing, summarisation of data, searching, speech processing or data linking.
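To make the first condition a little more concrete, here is a minimal sketch (my own illustration, not anything prescribed by the Act) of the kind of narrow procedural task it describes: a deliberately simple classifier that sorts incoming documents into hypothetical categories. The category names, keywords and function names are invented for the example.

```python
# A minimal sketch of a "narrow procedural task": classifying incoming
# documents into categories. The categories and keyword lists below are
# hypothetical examples, not anything defined in the Act.

CATEGORIES = {
    "invoice": ["invoice", "amount due", "payment terms"],
    "complaint": ["complaint", "dissatisfied", "refund"],
    "application": ["application", "apply", "candidate"],
}

def classify_document(text: str) -> str:
    """Return the category whose keywords appear most often in the text."""
    text_lower = text.lower()
    scores = {
        category: sum(text_lower.count(keyword) for keyword in keywords)
        for category, keywords in CATEGORIES.items()
    }
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    # If no keywords match at all, leave the document for a human to route.
    return best_category if best_score > 0 else "unclassified"

if __name__ == "__main__":
    print(classify_document("Please find attached the invoice with payment terms."))
    # -> "invoice"
```

The point is not the (deliberately naive) keyword matching; it is that the system only sorts documents and does not decide anything about the people behind them.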
That said, traceability and accountability are paramount. The Act requires the provider to document this assessment before the system is placed on the market or put into service, and to be able to produce that documentation when a national competent authority requests it. You will also be obliged to register your solution in the EU database mentioned in the Act.
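The Act does not prescribe a format for that assessment documentation, but as a thought experiment, a minimal record might look something like the sketch below. The field names and example values are entirely my own guess at the kind of thing you would want to be able to produce on request; they are not an official template.

```python
# A hypothetical, minimal record of a "not high-risk" assessment under the
# four conditions. The field names and structure are my own illustration;
# the Act requires the documentation but does not prescribe this format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Section53Assessment:
    system_name: str
    annex_iii_area: str          # e.g. "3. Education and Vocational Training"
    condition_relied_on: str     # one of the four conditions summarised above
    rationale: str               # why the system does not materially influence decisions
    assessed_by: str
    assessment_date: date = field(default_factory=date.today)
    registered_in_eu_database: bool = False

assessment = Section53Assessment(
    system_name="Document indexing assistant",
    annex_iii_area="5. Essential public services",
    condition_relied_on="Preparatory work (indexing, searching, linking data)",
    rationale="Output is an index used by caseworkers; it makes no eligibility decisions.",
    assessed_by="Compliance team",
)
print(assessment)
```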
Impact on decision making
The big issue here for Knowledge Managers is the overall reluctance to let AI impact human decision making in any meaningful way; impacting decisions is precisely what knowledge managers are largely trying to do through human-centred AI decision support, be that coordinating via agents, recalling and collating information from various data stores, or creating AI-based expert systems that provide subject matter advice for improved local decision making, like I spoke about in this post. While there do seem to be some carve-outs in Section 53 for rudimentary pre- and post-processing around decisions, if any of your AI solutions or products involve decision making, I think it would be wise for you to seek legal advice. If this is you, I would love you to share your experience as we all find out how the new AI Act is going to work on the ground.
Conclusion
So what do you think? Is there a concern here for KM? Legally, is there more to the Act that I have missed that balances out Section 53 and Articles 2 and 6? Are we at risk of legally forcing AI to only be used for information processing, not knowledge and expertise? I would love your feedback.
Section 53 Full Text
It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of decision-making, whether human or automated. An AI system that does not materially influence the outcome of decision-making could include situations in which one or more of the following conditions are fulfilled.

The first such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a context that is listed as a high-risk use in an annex to this Regulation.

The second condition should be that the task performed by the AI system is intended to improve the result of a previously completed human activity that may be relevant for the purposes of the high-risk uses listed in an annex to this Regulation. Considering those characteristics, the AI system provides only an additional layer to a human activity with consequently lowered risk. That condition would, for example, apply to AI systems that are intended to improve the language used in previously drafted documents, for example in relation to professional tone, academic style of language or by aligning text to a certain brand messaging.

The third condition should be that the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns. The risk would be lowered because the use of the AI system follows a previously completed human assessment which it is not meant to replace or influence, without proper human review. Such AI systems include for instance those that, given a certain grading pattern of a teacher, can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag potential inconsistencies or anomalies.

The fourth condition should be that the AI system is intended to perform a task that is only preparatory to an assessment relevant for the purposes of the AI systems listed in an annex to this Regulation, thus making the possible impact of the output of the system very low in terms of representing a risk for the assessment to follow. That condition covers, inter alia, smart solutions for file handling, which include various functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems used for translation of initial documents.

In any case, AI systems used in high-risk use-cases listed in an annex to this Regulation should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725.
To ensure traceability and transparency, a provider who considers that an AI system is not high-risk on the basis of the conditions referred to above should draw up documentation of the assessment before that system is placed on the market or put into service and should provide that documentation to national competent authorities upon request. Such a provider should be obliged to register the AI system in the EU database established under this Regulation. With a view to providing further guidance for the practical implementation of the conditions under which the AI systems listed in an annex to this Regulation are, on an exceptional basis, non-high-risk, the Commission should, after consulting the Board, provide guidelines specifying that practical implementation, completed by a comprehensive list of practical examples of use cases of AI systems that are high-risk and use cases that are not.
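To make the third condition a bit more concrete, here is a minimal sketch (again, my own illustration rather than anything the Act prescribes) of the kind of ex-post check the text describes: flagging grades that sit well outside a teacher's prior grading pattern. The data and the two-standard-deviation threshold are invented for the example, and the check runs after the human assessment rather than replacing it.

```python
# A minimal sketch of the third condition: an ex-post check that flags grades
# deviating from a teacher's prior grading pattern. The data and the
# two-standard-deviation threshold are invented for illustration; the check
# follows the human assessment and does not replace or change it.
from statistics import mean, stdev

def flag_deviations(prior_grades: list[float], new_grades: list[float],
                    threshold: float = 2.0) -> list[float]:
    """Return new grades more than `threshold` standard deviations from the prior mean."""
    prior_mean = mean(prior_grades)
    prior_sd = stdev(prior_grades)
    return [g for g in new_grades if abs(g - prior_mean) > threshold * prior_sd]

if __name__ == "__main__":
    prior = [62.0, 68.0, 71.0, 65.0, 70.0, 66.0, 69.0, 64.0]
    new = [67.0, 95.0, 63.0, 30.0]
    print(flag_deviations(prior, new))  # flags the grades well outside the usual pattern
```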
Image thanks to https://iblnews.org/