
The Concept of Identifiability in ML Models

von Maltzan, Stephanie 1
1 Zentrum für Angewandte Rechtswissenschaft (ZAR), Karlsruher Institut für Technologie (KIT)

Abstract:

Recent research indicates that the machine learning process can be reversed by adversarial attacks. These attacks can be used to derive personal information from the training data. The supposedly anonymising machine learning process is thus a process of pseudonymisation and is therefore subject to technical and organisational measures. Consequently, the unexamined belief in anonymisation as a guarantor of privacy cannot easily be upheld. It is therefore crucial to measure privacy through the lens of adversarial attacks, to distinguish precisely between personal and non-personal data, and above all to determine whether ML models constitute pseudonyms of the training data.


Original publication
DOI: 10.5220/0011081600003194
Affiliated institution(s) at KIT: Zentrum für Angewandte Rechtswissenschaft (ZAR)
Publication type: Conference proceedings paper
Publication year: 2022
Language: English
Identifier: ISBN: 978-9897585647
ISSN: 2184-4976
KITopen-ID: 1000157949
Published in: Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - IoTBDS
Event: 7th International Conference on Internet of Things, Big Data and Security (IoTBDS 2022), online, 22.04.2022 – 24.04.2022
Publisher: SciTePress
Pages: 215–222
Series: IoTBDS
Keywords: Anonymisation, Pseudonymisation, ML Model, Adversarial Attacks, Privacy, Utility
Indexed in: Dimensions, Scopus