
CyberCertBench: Evaluating LLMs in Cybersecurity Certification Knowledge

Keppler, Gustav ORCID iD icon 1; Elbez, Ghada ORCID iD icon 1; Hagenmeyer, Veit 1
1 Institut für Automation und angewandte Informatik (IAI), Karlsruher Institut für Technologie (KIT)

Abstract:

The rapid evolution and use of Large Language Models (LLMs) in professional workflows require an evaluation of their domain-specific knowledge against industry standards. We introduce CyberCertBench, a new suite of Multiple Choice Question Answering (MCQA) benchmarks derived from industry-recognized certifications. CyberCertBench evaluates LLM domain knowledge against the professional standards of Information Technology cybersecurity and more specialized areas such as Operational Technology and related cybersecurity standards. Concurrently, we propose and validate a novel Proposer-Verifier framework, a methodology to generate interpretable, natural language explanations for model performance. Our evaluation shows that frontier models achieve human expert level in general networking and IT security knowledge. However, their accuracy declines on questions that require vendor-specific nuances or knowledge of formal standards such as IEC 62443.


Original publication
DOI: 10.48550/arXiv.2604.20389
Associated KIT institution(s) Institut für Automation und angewandte Informatik (IAI)
Publication type Research report/Preprint
Publication year 2026
Language English
Identifier KITopen-ID: 1000192600
HGF programme 46.23.02 (POF IV, LK 01) Engineering Security for Energy Systems
Publisher arXiv
Extent 27 pp.
Published online in advance on 22.04.2026
Keywords Cryptography and Security (cs.CR), Artificial Intelligence (cs.AI)
Indexed in arXiv