
Leveraging Highly Approximated Multipliers in DNN Inference

Zervakis, Georgios; Frustaci, Fabio; Spantidi, Ourania; Anagnostopoulos, Iraklis; Amrouch, Hussam; Henkel, Jörg 1
1 Institut für Technische Informatik (ITEC), Karlsruher Institut für Technologie (KIT)

Abstract:

In this work, we present our control variate approximation technique, which enables the exploitation of highly approximate multipliers in Deep Neural Network (DNN) accelerators. Our approach does not require retraining and significantly decreases the error induced by approximate multiplications, improving the overall inference accuracy. As a result, control variate approximation satisfies tight accuracy-loss constraints while boosting power savings. Our experimental evaluation, across six different DNNs and several approximate multipliers, demonstrates the versatility of the control variate technique and shows that, compared to the accurate design, it achieves the same performance, 45% power reduction, and less than 1% average accuracy loss. Compared to the corresponding approximate designs without our technique, the error correction of the control variate method improves the accuracy by 1.9x on average.
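The core idea stated in the abstract, correcting the accumulated error of approximate multiplications with a statistically derived term instead of retraining, can be illustrated with a toy sketch. Note the assumptions: the truncation-based approximate multiplier, the uniform-operand profiling, and the simple mean-error correction below are illustrative stand-ins, not the specific multiplier designs or control-variate formulation used in the paper.

```python
import random

def approx_mul(a, b, trunc_bits=8):
    # Toy approximate multiplier (assumption): drop the low trunc_bits
    # of the exact product, mimicking a cheaper, error-prone circuit.
    return ((a * b) >> trunc_bits) << trunc_bits

def mean_error(trunc_bits=8):
    # Offline profiling (assumption: uniformly distributed 8-bit operands):
    # average error of one approximate multiplication.
    errs = [(a * w) - approx_mul(a, w, trunc_bits)
            for w in range(-128, 128) for a in range(256)]
    return sum(errs) / len(errs)

def neuron(acts, weights, correction=0.0, trunc_bits=8):
    # Dot product computed with approximate multipliers; the accumulated
    # expected error (the correction term) is added back at the end.
    s = sum(approx_mul(a, w, trunc_bits) for a, w in zip(acts, weights))
    return s + len(acts) * correction

random.seed(0)
acts = [random.randrange(256) for _ in range(64)]
weights = [random.randrange(-128, 128) for _ in range(64)]

exact = sum(a * w for a, w in zip(acts, weights))
mu = mean_error()
uncorrected = neuron(acts, weights)
corrected = neuron(acts, weights, correction=mu)
```

With 64 multiplications per neuron, the per-multiplication truncation errors accumulate systematically in `uncorrected`, while `corrected` cancels their expected value and leaves only the (much smaller) zero-mean residual, which is why such a correction can recover accuracy without retraining.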


Publisher's version
DOI: 10.5445/IR/1000180859
Published on 11.04.2025
Original publication
DOI: 10.1109/ACCESS.2025.3550520
Scopus
Citations: 1
Web of Science
Citations: 1
Dimensions
Citations: 1
Affiliated institution(s) at KIT: Institut für Technische Informatik (ITEC)
Publication type: Journal article
Year of publication: 2025
Language: English
Identifier: ISSN 2169-3536
KITopen-ID: 1000180859
Published in: IEEE Access
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Volume: 13
Pages: 47897–47911
Indexed in: Web of Science, Scopus, OpenAlex, Dimensions