The European AI Regulation - The Current Commission Draft and Practical Implications - Noerr
/ The European AI Regulation - The Current Commission Draft and Practical Implications. Digital Talks - 18 May 2021. Marieke Merkle, Rechtsanwältin; Dr. David Bomhard, Rechtsanwalt. noerr.com
/ Live webinar: procedure & notes. Questions via chat: You can ask questions live via the chat. In the public chat, your name and your questions are visible to all participants and in the recording, so please phrase questions so that no inferences can be drawn about specific cases. If you wish to remain anonymous, do not post questions in the public chat; instead, select "Only moderators" in the "To:" field. Please understand that we may not be able to answer every question; you are welcome to contact us afterwards. Privacy notice: Please note that the organiser may make video and audio recordings of the webinar and may publish them on the internet.
/ Article 3, point 1 - Definitions
‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with
/ Annex I - Artificial intelligence techniques and approaches
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods
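To make Annex I(a) concrete: a minimal, illustrative sketch of a supervised machine-learning system, the kind of software whose predictions would bring it within the Article 3 definition. The dataset, model and objective are assumptions chosen for brevity (Python with scikit-learn, assumed available):

    # Minimal sketch: a "machine learning approach" in the sense of Annex I(a).
    # The trained model generates "predictions" in the sense of Article 3, point 1;
    # dataset and objective are hypothetical placeholders.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)  # human-defined objective: classify iris species
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)        # supervised learning from labelled training data

    predictions = model.predict(X_test)  # outputs: predictions that may influence decisions
    print(predictions[:5])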
/ Article 3 - Definitions
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity
/ Risk-based approach
• AI prohibitions (Art. 5)
• High-risk AI (Art. 6-51)
• Special transparency obligations (Art. 52)
• Voluntary Code of Conduct (Art. 69)
/ Article 5 (1) - Prohibited artificial intelligence practices
1. The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
/ Article 5 (1) - Prohibited artificial intelligence practices
1. The following artificial intelligence practices shall be prohibited: […]
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
/ Article 5 (1) - Prohibited artificial intelligence practices
1. The following artificial intelligence practices shall be prohibited: […]
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific potential victims of crime, including missing children;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
/ Article 71 - Penalties
1. […] Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. […]
3. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;
(b) non-compliance of the AI system with the requirements laid down in Article 10.
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
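The fine caps work on a "whichever is higher" basis, so for large companies the turnover-based cap dominates. A small sketch of the arithmetic (the turnover figure is hypothetical):

    def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
        """Upper bound of an administrative fine under Article 71:
        the fixed cap or the turnover-based cap, whichever is higher."""
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    # Hypothetical company with EUR 2 bn worldwide annual turnover:
    print(max_fine(2_000_000_000, 30_000_000, 0.06))  # Art. 5/10 breach    -> 120,000,000
    print(max_fine(2_000_000_000, 20_000_000, 0.04))  # other breaches      ->  80,000,000
    print(max_fine(2_000_000_000, 10_000_000, 0.02))  # misleading info     ->  40,000,000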
/ Risk-based approach
• AI prohibitions (Art. 5)
• High-risk AI (Art. 6-51)
• Special transparency obligations (Art. 52)
• Voluntary Code of Conduct (Art. 69)
/ Article 6 - Classification rules for high-risk AI systems
“1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.”
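The two-pronged test of Article 6 can be read as simple boolean logic. A deliberately simplified sketch; it abstracts away all the legal nuance of "safety component" and of Annex II/III scoping:

    def is_high_risk(safety_component_under_annex_ii: bool,
                     third_party_conformity_assessment_required: bool,
                     listed_in_annex_iii: bool) -> bool:
        """Simplified sketch of the Article 6 test: high-risk either via Annex II
        product legislation (both conditions cumulatively) or via an Annex III listing."""
        via_annex_ii = (safety_component_under_annex_ii
                        and third_party_conformity_assessment_required)
        return via_annex_ii or listed_in_annex_iii

    # Example: a standalone system in an Annex III area (e.g. recruitment screening)
    print(is_high_risk(False, False, True))  # -> True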
/ Article 2 Nr. 2 - Scope
2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts, only Article 84 of this Regulation shall apply:
(a) Regulation (EC) 300/2008; (b) Regulation (EU) No 167/2013; (c) Regulation (EU) No 168/2013; (d) Directive 2014/90/EU; (e) Directive (EU) 2016/797; (f) Regulation (EU) 2018/858; (g) Regulation (EU) 2018/1139; (h) Regulation (EU) 2019/2144.
(Slide image: William M. Connolley, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1233520)
/ Annex III – High-risk AI systems referred to in Article 6 (2)
High-risk AI systems pursuant to Article 6 (2) are the AI systems listed in any of the following areas:
1. Biometric identification and categorisation of natural persons
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers management and access to self-employment
5. Access to and enjoyment of essential private services and public services and benefits
6. Law enforcement
7. Migration, asylum and border control management
8. Administration of justice and democratic processes
/ Provider obligations for ‘high-risk AI’
• Risk management system (Article 9)
• Data and data governance (Article 10)
• Technical documentation (Article 11)
• Record-keeping (Article 12)
• Transparency and provision of information to users (Article 13)
• Human oversight (Article 14)
• Accuracy, robustness and cybersecurity (Article 15)
/ Article 10 Nr. 2, 3 - Data and data governance
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular:
(a) the relevant design choices; […]
(f) examination in view of possible biases; […]
3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete. […]
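What an "examination in view of possible biases" or a completeness check might look like in code is left open by the draft. A hedged illustration using pandas over a hypothetical training set with a protected attribute; the column names and the disparity metric are assumptions, not requirements of the Regulation:

    import pandas as pd

    # Hypothetical training data with a protected attribute ("gender").
    df = pd.DataFrame({
        "gender": ["f", "m", "m", "f", "m", "m", "m", "f"],
        "income": [42_000, 51_000, None, 39_000, 60_000, 55_000, 47_000, 44_000],
        "label":  [1, 0, 1, 0, 1, 1, 0, 0],
    })

    # "free of errors and complete" (Art. 10 Nr. 3): check for missing values.
    print(df.isna().sum())

    # "examination in view of possible biases" (Art. 10 Nr. 2 (f)):
    # compare positive-label rates across groups of the protected attribute.
    rates = df.groupby("gender")["label"].mean()
    print(rates)
    print("max group disparity:", rates.max() - rates.min())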
/ Transparency
(Image source: https://twitter.com/kirkdborne/status/898594510976012288?lang=de)
/ Article 13 Nr. 1 - Transparency and provision of information to users
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. […]
/ (Image: LIME model explanations; source: https://theblue.ai/blog/lime-models-explanation/)
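The linked post concerns LIME (Local Interpretable Model-agnostic Explanations), one technique that could support the interpretability Article 13 calls for. A minimal sketch using the lime and scikit-learn packages (assumed installed); model and data are placeholders:

    # Minimal LIME sketch: explain one prediction of a black-box classifier.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )
    # Which features drove the prediction for one concrete instance?
    explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
    print(explanation.as_list())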
/ Article 14 Nr. 1, 4 - Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. […]
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances:
(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
/ Article 14 Nr. 4 - Human oversight
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances: […]
(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.
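How a "stop button" under Article 14 Nr. 4 (e) is implemented is not prescribed. One conceivable pattern, sketched with hypothetical names: a wrapper that lets the human overseer interrupt processing at any point:

    import threading

    class OverseenAISystem:
        """Hypothetical wrapper: a human overseer can halt processing at any time."""

        def __init__(self):
            self._stop = threading.Event()

        def stop(self):
            """The 'stop button' in the sense of Article 14 Nr. 4 (e)."""
            self._stop.set()

        def run(self, inputs):
            for item in inputs:
                if self._stop.is_set():
                    print("Stopped by human overseer; remaining inputs not processed.")
                    break
                print(f"processing {item}")  # placeholder for the actual model call

    system = OverseenAISystem()
    system.stop()          # overseer intervenes
    system.run([1, 2, 3])  # -> halts immediately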
/ Article 16 - Obligations of providers of high-risk AI systems
Providers of high-risk AI systems shall: […]
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service;
(f) comply with the registration obligations referred to in Article 51; […]
(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49;
/ Article 2 - Scope
1. This Regulation applies to:
(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
(b) users of AI systems located within the Union;
(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;
/ Article 28 - Obligations of distributors, importers, users or any other third-party
1. […] third-party shall be considered a provider […], in any of the following circumstances:
(a) they place on the market or put into service a high-risk AI system under their name or trademark;
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system.
/ Article 53 - AI regulatory sandboxes
1. AI regulatory sandboxes […] shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time […] […] shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, […] Union and Member States legislation […].
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. […]
4. Participants in the AI regulatory sandbox shall remain liable […] for any harm inflicted on third parties as a result from the experimentation taking place in the sandbox.
/ Risk-based approach
• AI prohibitions (Art. 5)
• High-risk AI (Art. 6-51)
• Special transparency obligations (Art. 52)
• Voluntary Code of Conduct (Art. 69)
/ Article 52 - Transparency obligations for certain AI systems
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. […]
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. […]
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. However, the first subparagraph shall not apply where […] it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences […].
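A minimal sketch of the Article 52 Nr. 1 disclosure duty in a hypothetical chatbot; the disclosure wording and the first-turn trigger are assumptions, not mandated by the draft:

    def chatbot_reply(user_message: str, first_turn: bool) -> str:
        """Hypothetical chatbot: discloses its nature on first contact (Art. 52 Nr. 1)."""
        disclosure = "Note: you are chatting with an AI system, not a human. "
        answer = f"You said: {user_message}"  # placeholder for the actual model output
        return (disclosure + answer) if first_turn else answer

    print(chatbot_reply("Hello!", first_turn=True))
    print(chatbot_reply("Thanks.", first_turn=False))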
/ Thank you for attending! Look forward to the upcoming Digital Talks:
On 19 May 2021, 3:00 p.m., Lucie Gerhard and Denis Förster will present on "Cyberattacks: smart practice from a company and law firm perspective".
On 29 June 2021, 1:30 p.m., Dr. David Bomhard and Andreas Daum will present on "Cloud general terms and conditions and regulated companies".
An overview of upcoming events and the corresponding registration links is available on our homepage under "Events". noerr.com
/ Thank you!
Marieke Merkle, Rechtsanwältin, Associate, specialises in legal advice on digitalisation processes and complex IT projects for national and international clients across industries (in particular agile software development, IT outsourcing, cloud computing, automation of business processes, data protection), as well as on software-as-a-service (SaaS), data-use agreements, cybersecurity, IT transactions and software copyright. Competencies: IT law (software contracts, outsourcing, digitalisation), cloud computing, open source software, AI/big data, general terms and conditions, data protection, copyright. +49 89 28628 227, marieke.merkle@noerr.com
Dr. David Bomhard, Rechtsanwalt, Senior Associate, is a physicist and lawyer specialising in legal advice on complex IT projects (in particular IT outsourcing, platforms, use of artificial intelligence), with a particular focus on IT and cloud outsourcing in the BaFin-regulated sector (in particular insurers and banks). He also regularly advises on data-use agreements, software as a service, cybersecurity, software copyright, IT transactions and data protection law, and is a lecturer in negotiation management as well as in IT and data law at the University of Passau. Competencies: IT law, outsourcing, agile software development, cloud computing/software as a service, service level agreements, artificial intelligence, data protection/IT security. +49 89 28628 2610, David.Bomhard@noerr.com