EPALE MOOC: Ethical and Legal Regulation of AI, in particular the EU Artificial Intelligence Act

Author: Dr Adrienn Hadady-Lukács, PhD, University of Szeged, Faculty of Law and Political Sciences, Institute of Labour Relations and Social Security Training
This is a translated article. Original language is Hungarian. The translation was prepared on behalf of the Hungarian EPALE National Support Service.
When developing AI, it is crucial to ensure that it benefits society. In this context, AI must comply not only with legal regulations, but also with ethical requirements, which overlap with those regulations or apply in their absence.
This chapter will first review the ethical requirements for the development of AI, and then move on to presenting the legal framework for AI, highlighting the provisions of legislation adopted before AI became widespread and of particular relevance to AI. Finally, the chapter will provide an overview of the key provisions of the Artificial Intelligence Act adopted by the EU in 2024.
1. Introduction
Technological advances have reached the point where the question is not whether it is possible to create AI, but what kind. This raises the question of developers' moral responsibility for what kind of AI they create. The fact that something is technically feasible does not mean that, from an ethical perspective, it should actually be implemented. Given the speed of technological development and the slowness of legislative processes, it is particularly important that developments comply with basic ethical standards even in the absence of explicit legal provisions.
In addition to exploiting the many benefits of AI and fostering innovation, an important objective is to ensure that developments benefit humanity and society – essentially, to develop a so-called ethical AI. It is essential that AI is safe, human-centred and environmentally friendly.[1] This is not to say that AI cannot be detrimental to society. Its benefits and risks must be weighed against each other in a way that benefits humanity as a whole.
All technological innovation has its drawbacks as well as its benefits. For example, a computer strains the eyes and consumes more electricity than a typewriter, but it allows you to work more efficiently. Motor vehicles pollute more, but are faster than horse-drawn carriages. In the case of AI, too, the benefits should be weighed against the drawbacks, and the latter should be mitigated.[2]
In addition to ethics, of course, compliance with the legislation must also be ensured. In this respect, it will be key not only to apply legislation that existed before AI and is equally relevant to it, but also to adopt legislation specifically tailored to AI. In the context of ethical AI, it is particularly important that the legislation governing AI also reflects the requirement for human-centred, ethical AI – while not inhibiting innovation.
2. Ethical challenges of AI
The requirement of ethical AI means that AI developers and researchers are expected to develop solutions and systems that benefit society as a whole – humanity itself. Basic ethical requirements should be taken into account in the development of AI even in the absence of explicit legal provisions, and legislation should be consistent with these principles.
According to Britannica, ethics is the discipline that deals with what is morally good and bad, or right and wrong. Ethics seeks answers to questions such as: ‘How should we live?’ ‘Shall we aim at happiness or at knowledge, virtue, or the creation of beautiful objects?’ ‘Is it right to be dishonest for a good cause?’ ‘Is going to war justified in cases where it is likely that innocent people will be killed?’ ‘Is it wrong to clone a human being or destroy human embryos in medical research?’
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) has compiled a short collection containing ethical dilemmas in the context of AI. This includes the issue of (gender) bias (just type the terms ‘schoolgirl’ and ‘schoolboy’ into a search engine and compare the results – ‘schoolgirl’ is likely to bring up many more results with sexual connotations than ‘schoolboy’); the use of AI in the context of justice (e.g. an algorithm to help a judge make a decision); the issue of AI-generated artworks (e.g. the ‘new’ Rembrandt art created by the AI known as the ‘Next Rembrandt’); and dilemmas related to self-driving vehicles (see the trolley problem discussed below).
AI can bring many benefits, but to reap the benefits it is essential that AI meets basic ethical requirements.
In addition to the potential benefits, the negative outcomes and risks must also be taken into account. A number of renowned experts, including Stephen Hawking, Bill Gates and Elon Musk, have expressed concerns about AI, urging developers to be cautious. On the silver screen, too, we often see depictions of AI and robots that are less than flattering – often destructive, seeking to enslave humanity. Classic examples include the Terminator and the Matrix movies. There are also more recent adaptations, such as several episodes of ‘Black Mirror’, which highlight the potential dangers of futuristic technology.
The requirement for ethical, reliable AI that benefits humanity also features among the objectives of many international organisations. The United Nations (UN), the Organisation for Economic Cooperation and Development (OECD) and the European Union (EU) have issued guidelines and recommendations on the basic requirements for AI. Among the non-legally binding documents adopted by the EU[3], we are going to highlight the Ethics Guidelines for Trustworthy AI adopted by the High-Level Expert Group on AI in 2019.
According to the guidelines, AI must meet 3 basic requirements:
- it must be lawful, i.e. it must respect the legislation;
- it must be ethical, i.e. it must respect ethical principles and values; and
- it must be robust, both from a technical perspective and in respect of the social environment.
The ethical principles and values enshrined in the document include respect for human autonomy, harm prevention, fairness and accountability, the protection of vulnerable groups (e.g. children, persons with disabilities), and the mitigation of risks posed by AI.
Building on these principles, the guidelines set out requirements that all AI systems must meet. These requirements are (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) social and environmental well-being, and (7) accountability.
In practice, however, it is difficult to judge what is ethical and how to programme the machine. For example, what should a self-driving car be programmed to do if it has to choose between causing two accidents? It is often difficult to make decisions even as a human being, as shown by the trolley problem. The trolley problem is an ethical thought experiment where a decision has to be made in the following situation: a trolley is speeding down the tracks, heading towards five people who, if hit, will surely all die. However, we can decide to intervene. If we pull the switch next to the track, the trolley will change direction and ‘only’ hit one person on the other track. What is the right choice? What will we programme the machine to do?
You can try online what you would decide if you were standing next to the switch. In the ‘game’ Absurd Trolley Problems, in addition to the classic variation, you can decide whether to let the trolley run over the original five people or divert it and destroy all your savings; or whether to grant the request of a rich man lying on the track who offers a fortune if you divert the trolley towards the other person, etc.
3. Legislation with implications for AI
AI does not exist in a regulatory vacuum. Legislation existed before AI, and these rules, while not specifically adopted with AI in mind, should be applied appropriately to AI as well. In the context of the legal challenges already discussed, such a rule is, for example, the EU General Data Protection Regulation (commonly known as the GDPR), which aims to enforce the right to the protection of personal data, whether processed by AI or by other means.
Another example is the right to equal treatment. Several EU directives provide for equal treatment, including non-discrimination in employment. These rules apply regardless of whether it is a human employer or an algorithmic management system that is responsible for the violation of equal treatment.
For example, at EU level, it is worth highlighting Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin and Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation. As regards Hungarian legislation, the Equal Treatment Act, i.e. Act CXXV of 2003 on equal treatment and the promotion of equal opportunities is noteworthy.
These more general pieces of legislation also include provisions that are (or may be) more closely related to particular aspects of AI. An example is Article 22 of the GDPR, entitled ‘Automated individual decision-making, including profiling’. Article 22 essentially provides that the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. (For example, a fully automated decision about a bank loan.) Although automated decision-making does not necessarily imply the use of AI, the role and use of AI-based automated decision-making is likely to increase in the future as the technology gains ground. However, there are exceptions to the prohibition laid down in Article 22 of the GDPR. Such decision-making may still be allowed if:
- it is necessary for entering into, or performance of, a contract between the data subject and a data controller;
- it is authorised by national or EU law; or
- the data subject has given his or her explicit consent.
The GDPR also sets out further guarantees that such decision-making must respect the rights, freedoms and legitimate interests of the data subject. In addition, in certain cases, the data subject has the right to request human intervention, to express his or her point of view, or to object to the decision.
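The prohibition-with-exceptions structure of Article 22 described above can be sketched in a few lines of code. This is purely an illustration of the rule's logic; the function and parameter names are invented for this example and do not come from any real compliance library, and a real legal assessment involves far more nuance.

```python
# Illustrative sketch (hypothetical names) of the Article 22 GDPR logic:
# a decision based solely on automated processing, with legal or similarly
# significant effects, is prohibited unless one of three exceptions applies.

def solely_automated_decision_allowed(
    necessary_for_contract: bool,  # exception 1: contract with the data subject
    authorised_by_law: bool,       # exception 2: national or EU law authorises it
    explicit_consent: bool,        # exception 3: explicit consent of the data subject
) -> bool:
    """Return True if any of the three Article 22(2) exceptions applies."""
    return necessary_for_contract or authorised_by_law or explicit_consent

# Example: a fully automated bank-loan decision based on the applicant's
# explicit consent falls under the third exception.
print(solely_automated_decision_allowed(False, False, True))   # True
# With no exception applicable, the prohibition stands.
print(solely_automated_decision_allowed(False, False, False))  # False
```

Even where an exception applies, the safeguards mentioned above (human intervention, the right to express one's point of view, the right to contest the decision) must still be respected.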
Another example is the EU Directive on improving working conditions in platform work (the so-called Platform Work Directive) in the context of algorithmic management.
The rise of platform work and the associated legal challenges, in particular the question of the classification of the legal relationship (is the platform worker an employee at all?) and algorithmic management, have made it necessary to regulate this type of work. The European Commission published a Proposal for a Directive on improving working conditions in platform work in 2021, which was adopted by the European Parliament on 24 April 2024 after lengthy negotiations.[4]
The objectives of the Platform Work Directive include facilitating the correct classification of employment status and ensuring the fairness, transparency and accountability of algorithmic management. In the context of algorithmic management, the Platform Work Directive, among other things, prohibits the processing of certain particularly sensitive data (e.g. data on the emotional or psychological state of a worker), and imposes transparency requirements and human oversight, including an evaluation, at least every two years, of the impact of such systems on platform workers. It requires certain decisions to be justified in writing and, where necessary, revised and corrected.[5]
4. EU Regulation on artificial intelligence
Despite the fact that existing legislation also applies to AI, there is a need for legislation specifically adapted to its specificities. In April 2021, the EU published its first draft Regulation on AI. It is the first legislation in the world to regulate AI in a comprehensive way.[6] According to the proposal, the draft regulation creates a proportionate regulatory framework that does not hamper innovation, while addressing the risks associated with AI. The Regulation, which is binding for all Member States (Regulation (EU) 2024/1689 on artificial intelligence; hereinafter referred to as the Regulation), was finally adopted in 2024, entered into force on 1 August 2024 and, with some exceptions, its provisions will apply from 2 August 2026, after a two-year preparation period.
The Regulation is a type of EU legislation, and is a binding legislative act. It applies automatically and uniformly in all Member States, without the need for a Member State to transpose it into national law. Derogations are possible only in exceptional cases, where the Regulation itself authorises Member States to adopt them.
In addition to the Regulation, the Commission published, in September 2022, the proposal for the AI Liability Directive. The proposal for a Directive establishes liability rules for various AI-based solutions, and complements and modernises EU rules on non-contractual civil liability, introducing specific rules for damage caused by AI systems.[7]
According to Article 1 of the Regulation, its aim is to:
- improve the functioning of the internal market;
- promote the uptake of human-centric and trustworthy AI, ensuring a high level of health, safety, and fundamental rights protection in the EU; and
- support innovation.
The scope of the Regulation is set out in Article 2, which specifies to whom, in what areas and where the Regulation applies. The personal scope of the Regulation (i.e. whom it covers) extends to providers and users (deployers) of AI systems, importers, distributors, product manufacturers and authorised representatives of providers established outside the EU. The territorial scope of the Regulation (i.e. the geographical area in which it applies) covers not only the EU but also, in certain cases, operators established outside the EU, if their activities extend to the territory of the EU. The material scope of the Regulation (i.e. what the legislation applies to) covers, in principle, AI systems.[8]
Each of these terms (e.g. AI system, distributor, importer) is defined in Article 3 of the Regulation.
As for the temporal scope (the period in which the legislation applies), the Regulation will apply from 2 August 2026, with some exceptions. The scope of the Regulation covers both the private and the public sector. The Regulation also specifies the situations in which it does not apply. This includes, for example, national security, scientific research and development, and exclusive personal use by natural persons.[9]
The Regulation takes a risk-based approach, assigning different obligations to different levels of risk. The Regulation distinguishes between four levels of risk: unacceptable risk, high risk, limited risk and minimal risk.
AI systems classified as posing unacceptable risk include systems that are in conflict with EU law or fundamental rights, and are, in principle, prohibited. Such systems include social scoring or crime prediction based on profiling or the assessment of personality traits and characteristics.
An extreme representation of a similar system is shown in Minority Report, a 2002 science fiction film in which would-be offenders are arrested before they have committed the crime.
High-risk AI systems, which pose a high risk to the health and safety of natural persons or to their fundamental rights, may, in principle, be used, but must meet a number of requirements. High-risk AI systems include those that fulfil both of the following conditions [Article 6(1) of the Regulation]:
- “the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
- the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.”
In addition, the systems listed in Annex III to the Regulation are considered high-risk AI systems [Article 6(2) of the Regulation]. These include, for example, AI systems used in employment, the management of workers and access to self-employment (e.g. AI systems intended for use in the recruitment or selection of natural persons, or used to make certain decisions about work); bank credit assessments made by an AI system may also fall within this scope. (Annex III of the Regulation) The Regulation also provides for an exception: a system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including where it does not materially influence the outcome of decision-making. [Article 6(3) of the Regulation]
High-risk AI systems must meet several requirements. These include risk management, documentation and record keeping, transparency and information, human oversight, accuracy, robustness and cybersecurity. (Chapter III, Section 2 of the Regulation)
AI systems posing limited risk must, in principle, meet transparency requirements. Such systems include chatbots: users must be made aware that they are dealing not with a human administrator, but with a chatbot. (Chapter IV of the Regulation)
A good example of AI systems with minimal risk would be spam filters. These systems are essentially free to use, subject to the requirements of ethical development.[10]
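The risk-based structure discussed above can be summarised in a small sketch. The tier names and example obligations below are paraphrased from this chapter for illustration only; they are not the binding wording of the Regulation, which must always be consulted directly.

```python
# Illustrative summary (paraphrased, not legal text) of the four risk
# tiers of the EU AI Act and their typical consequences.
RISK_TIERS = {
    "unacceptable": {
        "example": "social scoring",
        "consequence": "prohibited in principle",
    },
    "high": {
        "example": "AI used in recruitment or bank credit assessment",
        "consequence": "permitted, subject to strict requirements "
                       "(risk management, documentation, human oversight, ...)",
    },
    "limited": {
        "example": "chatbot",
        "consequence": "transparency obligations (users must know it is AI)",
    },
    "minimal": {
        "example": "spam filter",
        "consequence": "essentially free to use; ethical development encouraged",
    },
}

for tier, info in RISK_TIERS.items():
    print(f"{tier}: {info['example']} -> {info['consequence']}")
```

The point of the structure is that regulatory burden scales with risk: the same developer faces an outright ban at one end of the scale and virtually no obligations at the other.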
Compliance with the Regulation is monitored by national authorities and, at EU level, by the so-called European AI Office, established within the European Commission and supported by the European Artificial Intelligence Board, which is composed of representatives of the Member States. The AI Office plays an important advisory and guidance role, and is crucial for the consistent and harmonised application of the Regulation across the EU.[11]
Chapter XII of the Regulation requires Member States to lay down penalties for infringements of the Regulation. These penalties must be effective, proportionate and dissuasive. One such penalty is the administrative fine, for which the Regulation itself sets maximum amounts for certain infringements. The maximum fine for non-compliance with the prohibition on AI systems posing an unacceptable risk is €35 million or, if the offender is a company, up to 7% of its total worldwide annual turnover in the previous financial year, whichever is higher. [Article 99(3) of the Regulation]
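The ‘whichever is higher’ rule of Article 99(3) is simple arithmetic and can be shown in a few lines (the function name and figures used in the example are illustrative):

```python
# Article 99(3) of the AI Act: the fine cap for breaching the prohibition
# on unacceptable-risk AI systems is EUR 35 million or 7% of total
# worldwide annual turnover of the preceding financial year,
# whichever amount is higher.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine under Article 99(3)."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 400 million turnover: 7% = EUR 28 million,
# so the EUR 35 million floor applies.
print(max_fine_eur(400_000_000))
# A company with EUR 1 billion turnover: 7% = EUR 70 million,
# which exceeds the floor and therefore becomes the cap.
print(max_fine_eur(1_000_000_000))
```

Note that these are upper limits: the actual fine imposed by a national authority may be lower, depending on the circumstances of the infringement.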
5. Summary
AI is no longer a product of the distant future. We are living in an exciting time, when we can experience first-hand how AI is manifesting itself in everyday life. Just think of ChatGPT, digital assistants on smartphones and computers, and more. This is just the tip of the iceberg – AI has huge potential. In order to realise this potential, it is important that the development of AI meets certain basic requirements. These requirements can be moral, ethical, or legal in nature.
The ethical requirements (the overall requirement for human-centric, trustworthy AI that benefits society) have been reflected in various legally non-binding guidelines and positions adopted by international organisations, have been incorporated into the EU AI Regulation and – even though nothing can be said for certain at this point – will presumably be reflected in future AI legislation as well.
Recommended readings, references
European Commission: Mesterséges intelligencia – Kérdések és válaszok [Artificial intelligence – Questions and answers]. 1 August 2024. Available at: https://ec.europa.eu/commission/presscorner/detail/hu/QANDA_21_1683 (Accessed: 11 November 2024)
Hadady-Lukács Adrienn: Az EU és a belső piac: az adatvédelem digitális kihívásai, különös tekintettel a mesterséges intelligenciára [The EU and the internal market: the digital challenges of data protection, with particular regard to artificial intelligence]. EU Jog online 2024/3.
Pók László: "Felkészülés az MI Rendelet alkalmazására" [Preparing for the application of the AI Regulation] series. Available at: https://gdpr.blog.hu/tags/mesters%C3%A9ges_intelligencia
Referenced sources
[1] Coursera: AI Ethics: What It Is and Why It Matters. 2024. Available at: https://www.coursera.org/articles/ai-ethics (Accessed: 11 November 2024)
[2] Négyesi Imre: A mesterséges intelligencia és az etika [Artificial intelligence and ethics]. Hadtudomány (2020) No. 1, p. 106.
[3] Hadady-Lukács Adrienn: Az EU és a belső piac: az adatvédelem digitális kihívásai, különös tekintettel a mesterséges intelligenciára [The EU and the internal market: the digital challenges of data protection, with particular regard to artificial intelligence]. EU Jog online (2024) No. 3.
[4] Position of the European Parliament adopted at first reading on 24 April 2024 with a view to the adoption of Directive (EU) 2024/... of the European Parliament and of the Council on improving working conditions in platform work. (P9_TC1-COD(2021)0414.)
[5] See further: Kiss Gergely Árpád: A platformmunka szabályozási perspektívái az új uniós irányelv tükrében [Regulatory perspectives of platform work in light of the new EU directive]. Közjogi Szemle (2023) No. 3, pp. 65–70; Lukács Adrienn: Platform munkavégzés és egyenlő bánásmód, különös tekintettel az Európai Unió platform irányelvére [Platform work and equal treatment, with particular regard to the EU Platform Work Directive]. In: Forum: Acta Juridica et Politica, forthcoming.
[6] European Commission: AI Act. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (Accessed: 11 November 2024)
[7] European Commission: Kérdések és válaszok: A mesterséges intelligenciával kapcsolatos felelősségről szóló irányelv [Questions and answers: the Directive on liability related to artificial intelligence], 2022. Available at: https://ec.europa.eu/commission/presscorner/detail/hu/QANDA_22_5793 (Accessed: 11 November 2024)
[8] Pók László: Felkészülés az MI Rendelet alkalmazására – 2. rész: az MI Rendelet hatálya [Preparing for the application of the AI Regulation – Part 2: the scope of the AI Regulation]. GDPR blog, 2024. Available at: https://gdpr.blog.hu/2024/06/03/felkeszules_az_mi_rendelet_alkalmazasara_2_resz_az_mi_rendelet_hatalya (Accessed: 11 November 2024)
[9] Pók László: Felkészülés az MI Rendelet alkalmazására – 2. rész: az MI Rendelet hatálya [Preparing for the application of the AI Regulation – Part 2: the scope of the AI Regulation]. GDPR blog, 2024. Available at: https://gdpr.blog.hu/2024/06/03/felkeszules_az_mi_rendelet_alkalmazasara_2_resz_az_mi_rendelet_hatalya (Accessed: 11 November 2024)
[10] Pók László: Felkészülés az MI Rendelet alkalmazására – 4. rész: mit jelent a kockázatalapú megközelítés? [Preparing for the application of the AI Regulation – Part 4: what does the risk-based approach mean?] GDPR blog, 2024. Available at: https://gdpr.blog.hu/2024/07/02/felkeszules_az_mi_rendelet_alkalmazasara_4_resz_mit_jelent_a_kockazatalapu_megkozelites (Accessed: 11 November 2024)
[11] European Commission: Mesterséges intelligencia – Kérdések és válaszok [Artificial intelligence – Questions and answers]. 1 August 2024. Available at: https://ec.europa.eu/commission/presscorner/detail/hu/QANDA_21_1683 (Accessed: 11 November 2024)