
NoRBERT for Requirements Classification in Traceability Link Recovery Datasets

Hey, Tobias 1,2
1 Institut für Programmstrukturen und Datenorganisation (IPD), Karlsruher Institut für Technologie (KIT)
2 Institut für Informationssicherheit und Verlässlichkeit (KASTEL), Karlsruher Institut für Technologie (KIT)

Abstract:

This repository contains all code needed to train and evaluate binary classifiers that predict the functional aspects present in a requirement element. It is based on NoRBERT and applies it to a labeled dataset that combines the relabeled Promise NFR dataset with labeled requirement elements from five traceability link recovery benchmark datasets.
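The abstract describes one binary classifier per functional aspect rather than a single multi-class model. The sketch below illustrates that decomposition only; the keyword matcher is a made-up stand-in for the fine-tuned BERT models, and the aspect names (Function, Data, Behavior) are an assumption based on the NoRBERT line of work, not taken from this repository.

```python
# Illustrative sketch (not from the repository): the multi-label task is
# decomposed into one independent binary decision per functional aspect.
# The keyword lists are invented placeholders for the trained models.

ASPECT_KEYWORDS = {
    "Function": ["shall provide", "computes", "calculates"],
    "Data": ["stores", "record", "database"],
    "Behavior": ["when", "responds", "notifies"],
}

def predict_aspects(requirement):
    """Run one independent binary decision per aspect."""
    text = requirement.lower()
    return {
        aspect: any(kw in text for kw in keywords)
        for aspect, keywords in ASPECT_KEYWORDS.items()
    }

req = "The system shall provide reports and stores each record in a database."
print(predict_aspects(req))
# {'Function': True, 'Data': True, 'Behavior': False}
```

Because each aspect is decided independently, a single requirement element can carry several aspect labels at once, which is why separate binary classifiers are used.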


Original publication
DOI: 10.5281/zenodo.8348363
Associated KIT institution(s) Institut für Programmstrukturen und Datenorganisation (IPD)
Publication type Research data
Publication year 2023
Identifier KITopen-ID: 1000162448
License GNU General Public License v3.0 or later
Readme

Requirements

You need to install Jupyter. Depending on your Python installation, you may also have to install further Python dependencies beyond the ones installed in the first cell of the notebook. Make sure that all Python libraries imported in the second cell are installed via pip; in particular, PyTorch must be installed. You will need a machine with a powerful GPU (at least 12 GB of GPU RAM for the large models), as the pretrained BERT model is very memory-hungry. Also make sure that your GPU and drivers support CUDA. We recommend Ubuntu as the operating system.

We used Python 3.7 and the corresponding dependency versions. For the exact versions, see requirements.txt.
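A possible setup sequence is sketched below. The individual package names are an assumption for illustration; requirements.txt in the repository holds the authoritative, pinned dependency list.

```shell
# Sketch of an environment setup (assumed commands; consult
# requirements.txt for the exact pinned versions).
pip install jupyter
pip install torch                  # pick the build matching your CUDA version
pip install -r requirements.txt    # remaining dependencies
jupyter notebook                   # then open the provided notebook
```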

Docker Image

As GPU support depends on the installed drivers and CUDA version, the provided Docker image NoRBERT_for_TLR_docker_image.tar.gz only supports CPU execution and has to be extended with CUDA support to run on a GPU.

You can load the image with

docker image load < NoRBERT_for_TLR_docker_image.tar.gz

and run it with

docker run --rm -p 8888:8888 tobhey/norbert_for_tlr
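One way to add the missing GPU support is sketched below. This workflow is an assumption, not part of the repository: it presumes the host has NVIDIA drivers and the NVIDIA Container Toolkit installed, and that a CUDA-enabled PyTorch build is installed inside the image.

```shell
# Assumed workflow for GPU support (requires NVIDIA drivers and the
# NVIDIA Container Toolkit on the host):
docker run --rm --gpus all -p 8888:8888 tobhey/norbert_for_tlr
# Inside the container (or in a derived Dockerfile), replace the
# CPU-only PyTorch with a CUDA build matching your driver, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/cu118
```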

Attribution (of datasets used)

The relabeled Promise dataset can be attributed to Dalpiaz et al.: F. Dalpiaz, D. Dell’Anna, F. B. Aydemir, and S. Çevikol, “explainable-re/re-2019-materials,” Jul. 2019. https://doi.org/10.5281/zenodo.3309669

The TLR dataset comprises preprocessed requirements of the eTour, iTrust, SMOS, eAnci, and LibEST datasets. As the original SMOS and eAnci requirements were written in Italian, the dataset includes automatically translated English versions of these requirements. The datasets were retrieved from the website of the Center of Excellence for Software & Systems Traceability (CoEST).

Attribution for the datasets:

The original eTour dataset was provided for the TEFSE challenge at the 6th International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE), 2011, and was retrieved from http://coest.org/

The iTrust dataset was retrieved from http://coest.org/

The original SMOS and eAnci datasets can be attributed to Gethers et al., “On integrating orthogonal information retrieval methods to improve traceability recovery,” in 2011 27th IEEE International Conference on Software Maintenance (ICSM), Sep. 2011, and were retrieved from http://coest.org/

The LibEST dataset can be attributed to Moran et al., “Improving the Effectiveness of Traceability Link Recovery using Hierarchical Bayesian Networks,” in 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), May 2020, and was retrieved from https://gitlab.com/SEMERU-Code-Public/Data/icse20-comet-data-replication-package

Type of research data Software