
Non-Uniform Adversarially Robust Pruning

Zhao, Qi 1,2; Königl, Tim 1,2; Wressnegger, Christian 1,2
1 Kompetenzzentrum für angewandte Sicherheitstechnologie (KASTEL), Karlsruher Institut für Technologie (KIT)
2 Institut für Informationssicherheit und Verlässlichkeit (KASTEL), Karlsruher Institut für Technologie (KIT)

Abstract:

Neural networks are often highly redundant and can thus be compressed to a fraction of their initial size using model pruning techniques without harming overall prediction accuracy. In addition, pruned networks need to maintain robustness against attacks such as adversarial examples. Recent research on combining these objectives has shown significant advances using uniform compression strategies, that is, all parameters are compressed equally according to a preset compression ratio. In this paper, we show that employing non-uniform compression strategies improves both clean-data accuracy and adversarial robustness under high overall compression, in particular when using channel pruning. We leverage reinforcement learning to find an optimal trade-off and demonstrate that the resulting compression strategy can be used as a plug-in replacement for the uniform compression ratios of existing state-of-the-art approaches.
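The core idea of non-uniform channel pruning, as described in the abstract, can be illustrated with a minimal sketch: each layer is assigned its own keep ratio instead of one preset ratio for the whole network, and channels are ranked by an importance score before removal. The score function (L1 norm of each output channel's weights), the per-layer ratios, and all names below are illustrative assumptions, not the authors' implementation; in the paper the per-layer ratios are found via reinforcement learning.

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """Keep the output channels with the largest L1 norm.

    weights: array of shape (out_channels, in_channels, k, k),
    as in a convolutional layer. Returns the pruned weight tensor.
    """
    # Importance score per output channel: sum of absolute weights.
    scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    # Number of channels to keep for this layer (at least one).
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the top-scoring channels, in their original order.
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep]

# Non-uniform strategy: each layer gets its own keep ratio (in the paper,
# chosen by an RL agent) instead of one uniform ratio for all layers.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((c, 3, 3, 3)) for c in (16, 32, 64)]
ratios = [0.9, 0.5, 0.25]  # hypothetical per-layer keep ratios
pruned = [prune_channels(w, r) for w, r in zip(layers, ratios)]
print([w.shape[0] for w in pruned])  # channel counts after pruning
```

A uniform strategy would use the same ratio for every layer; the non-uniform variant can prune redundant layers aggressively while sparing layers that matter more for accuracy and robustness.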


Publisher's version §
DOI: 10.5445/IR/1000148112
Published on 01.12.2023
Affiliated institution(s) at KIT: Institut für Informationssicherheit und Verlässlichkeit (KASTEL)
Institut für Theoretische Informatik (ITI)
Kompetenzzentrum für angewandte Sicherheitstechnologie (KASTEL)
Publication type: Conference proceedings paper
Publication date: 25.07.2022
Language: English
Identifier: KITopen-ID: 1000148112
HGF program: 46.23.01 (POF IV, LK 01) Methods for Engineering Secure Systems
Published in: Proc. of the 1st International Conference on Automated Machine Learning (AutoML)
Event: 1st International Conference on Automated Machine Learning (AutoML 2022), Baltimore, MD, USA, 25.07.2022 – 27.07.2022
External relations: Abstract/Full text
Keywords: SECML, AutoML, Neural Networks, Compression, Robustness
Indexed in: Scopus