On April 21, 2021, the European Commission proposed new artificial intelligence (AI) legislation for the European Union (EU), known as the EU Artificial Intelligence Act (AI Act) or the European AI Regulation. The proposal is the first of its kind and is intended to regulate the use of AI within the EU: a comprehensive set of rules designed to ensure that AI systems are safe, transparent, and ethical. Since AI is currently the subject of intense public debate, not least because of the chatbot ChatGPT, we would like to clearly present and explain the most important aspects of the EU AI Act in this article.
Key elements of the EU AI Act
Although artificial intelligence has been on the public agenda of the European institutions since 2018 and the EU Commission submitted its first draft law to Parliament in 2021, the institutions are not expected to reach an agreement until this year or next. It is also not yet clear by when the regulation will have to be implemented in all member states. The final law may therefore deviate from the key points outlined below. [1, 2]
- Prohibition of certain AI systems: The proposed EU AI law prohibits certain AI systems that are considered harmful to individuals or society. These include systems that manipulate human behavior, social scoring systems, and AI systems used for mass surveillance.
- High-risk AI systems: The EU AI Act classifies certain AI systems as high-risk. These include AI systems used in critical infrastructure, such as transport and healthcare, or in public services, such as law enforcement or border control.
- Transparency: The proposed law requires that AI systems and algorithms be transparent and explainable. This means that individuals must be informed when they interact with an AI system and must receive information about how it works and what data it uses.
- Privacy: The AI Regulation requires that AI systems respect the privacy of individuals. These systems must be designed to protect personal data and to give individuals control over their data.
- Human oversight: The law also requires that high-risk AI systems be supervised by humans, meaning that a specific person must be responsible for the AI’s decisions.
- Testing & Certification: The EU AI Act requires that high-risk AI systems be tested and certified before they can go live. This is to ensure that these systems are safe, reliable, and perform as expected.
Impact on European Society & Business
As might be expected, the EU AI Act’s potential advantages and disadvantages for society and companies are the subject of public controversy. We limit ourselves here to what we consider the most important implications in this context. [3, 4]
- Greater trust in AI: The proposed European AI Regulation’s requirements for transparency, explainability, and human oversight are intended to increase trust in AI. If AI systems are ensured to be safe, reliable, and ethical, people could become more willing to interact with them, which could lead to increased adoption of AI technologies.
- Protection of fundamental rights: The draft law is intended to strengthen fundamental rights such as the protection of privacy and to counteract discrimination. Thus, some negative effects of AI on society could be prevented, at least in theory.
- Promoting innovation: The proposed EU AI law’s requirements for testing and certification of high-risk systems could promote technological innovation by creating a level playing field for companies that develop and deploy AI. In addition, mitigating risk through appropriate testing may increase the willingness to invest in modern AI technologies.
- Costs & Effort: In contrast to the largely positive effects above, the new law may also lead to higher costs and more bureaucracy for companies and consumers. In particular, high-risk systems may incur significant costs for all stakeholders, including the government, which must verify compliance with these regulations.
- Liability: The new draft law creates a clear basis for liability for companies that use AI. If an AI system causes harm, the company or persons responsible for the system could be held liable.
- Disparate impact: Because the draft law applies to AI in general, its impact on different industries and sectors may be uneven; that is, some industries could be affected by the law considerably more than others.
Conclusion
The EU AI Act has both advantages and disadvantages. By ensuring that AI systems are safe, transparent, and ethical, the law could help increase trust in AI and prevent negative impacts of AI on society. In this context, a broad societal discussion is needed about what “ethics” actually means for such highly complex computer systems and how such systems can and must be concretely reviewed and evaluated. At the same time, the law will likely increase bureaucracy and costs for all stakeholders, especially businesses, with some industries more affected than others. For this reason, some business associations and political organizations warn against overregulation. One criticism is that a general law on AI is not sensible in view of the diversity of algorithms and possible applications, and that sector-specific regulations would be preferable. Data protection advocates and activists, on the other hand, criticize loopholes with regard to fundamental rights and data protection. It will therefore be important to observe the implementation of the law and its effects on society over time and to make adjustments where necessary. [3, 4, 5]
Sources
1. European Commission (2023): A European approach to artificial intelligence. Online at https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
2. Bertuzzi, Luca (2023): KI-Gesetz – EU-Parlament schlägt Änderungen vor. Online at https://www.euractiv.de/section/innovation/news/ki-gesetz-eu-parlament-schlaegt-aenderungen-vor/
3. Kulbatzki, Josefine (2022): Wie die EU und die USA Algorithmen regulieren wollen. Online at https://netzpolitik.org/2022/ai-act-vs-algorithmic-accountability-act-wie-die-eu-und-die-usa-algorithmen-regulieren-wollen/
4. Kerkmann, Christof (2022): Künstliche Intelligenz – Wirtschaft warnt vor “massiven Einschränkungen” durch AI Act. Online at https://www.handelsblatt.com/technik/it-internet/eu-regulierung-kuenstliche-intelligenz-wirtschaft-warnt-vor-massiven-einschraenkungen-durch-ai-act/28850684.html
5. Wedig, Marco (2022): KI-Verordnung der EU. Es gibt Schlupflöcher in diesem Gesetzesentwurf. Online at https://www.spiegel.de/deinspiegel/ki-verordnung-der-eu-es-gibt-schlupfloecher-in-diesem-gesetzentwurf-a-303ddb95-5a0b-4694-9a5f-204e01bbde25