
L1RA: Dynamic Rank Assignment in LoRA Fine-Tuning

Singh, Raul; Brunello, Nicolò; Scotti, Vincenzo 1; Carman, Mark
1 Institut für Informationssicherheit und Verlässlichkeit (KASTEL), Karlsruher Institut für Technologie (KIT)

Abstract (English):

The ability of Large Language Models (LLMs) to solve complex tasks has made them crucial in the development of AI-based applications. However, the high computational requirements of fine-tuning these LLMs on downstream tasks pose significant challenges, particularly when resources are limited. In response to this challenge, we introduce L1RA, a novel technique for dynamically distributing the rank of low-rank adapters during fine-tuning with LoRA. Given a rank budget (i.e., the total sum of adapter ranks), L1RA leverages L1 regularisation to prune redundant ranks and redistribute them across adapters, thereby optimising resource utilisation. Through a series of comprehensive experiments, we empirically demonstrate that L1RA incurs computational overhead comparable to, or even lower than, other LoRA variants, including the vanilla approach, while achieving the same or better performance. Moreover, a post-training analysis of the rank distribution reveals which model components require the most adaptation to align with the task objective: the feed-forward layers and the attention output projection. These results highlight the efficacy of L1RA not only in enhancing the efficiency of LLM fine-tuning, but also in providing valuable diagnostic information for model refinement and customisation.
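The mechanism sketched in the abstract (an L1 penalty that shrinks redundant adapter ranks so their budget can be reassigned across adapters) can be illustrated with a small PyTorch sketch. The names GatedLoRALinear, gate, lam, and tol below are hypothetical, and the placement of a per-rank gate between the LoRA A and B matrices is an assumption for illustration, not necessarily the paper's exact formulation.

    import torch
    import torch.nn as nn

    class GatedLoRALinear(nn.Module):
        """A LoRA adapter with one learnable gate per rank. An L1 penalty on
        the gates drives redundant ranks toward zero so they can be pruned.
        Hypothetical sketch; names and pruning rule are assumptions."""

        def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # freeze the pretrained weight
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.gate = nn.Parameter(torch.ones(rank))  # one gate per rank
            self.scaling = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Standard LoRA update with the rank dimension gated:
            # y = W x + (alpha / r) * B diag(g) A x
            delta = ((x @ self.A.t()) * self.gate) @ self.B.t()
            return self.base(x) + self.scaling * delta

    def l1_gate_penalty(adapters, lam: float = 1e-4) -> torch.Tensor:
        # L1 regularisation term added to the task loss; it pushes
        # unneeded gates (and hence ranks) toward zero.
        return lam * sum(a.gate.abs().sum() for a in adapters)

    def prunable_ranks(adapter: GatedLoRALinear, tol: float = 1e-3) -> int:
        # Ranks whose gate magnitude fell below `tol` can be pruned; the
        # freed budget can be reassigned to adapters whose gates stayed large.
        return int((adapter.gate.abs() < tol).sum())

In a training loop, one would add l1_gate_penalty(adapters) to the task loss and periodically move rank capacity away from adapters with many prunable ranks, keeping the total rank budget constant as the abstract describes.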


Preprint
DOI: 10.5445/IR/1000185880
Published on 20.10.2025
Affiliated institution(s) at KIT: Institut für Informationssicherheit und Verlässlichkeit (KASTEL)
Publication type: Conference proceedings contribution
Publication year: 2025
Language: English
Identifier: KITopen-ID: 1000185880
HGF programme: 46.23.01 (POF IV, LK 01) Methods for Engineering Secure Systems
Published in: Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025). Ed.: M. Abbas
Event: 8th International Conference on Natural Language and Speech Processing (ICNLSP 2025), Odense, Denmark, 25.08.2025 – 27.08.2025
Publisher: Association for Computational Linguistics (ACL)
Pages: 360–373
Keywords: large language model, parameter-efficient fine-tuning, LoRA, L1RA