
Performance-Detective: Automatic Deduction of Cheap and Accurate Performance Models - Supplementary Material

Schmid, Larissa 1
1 Karlsruher Institut für Technologie (KIT)


Associated KIT institution(s): Institut für Informationssicherheit und Verlässlichkeit (KASTEL)
Publication type: Research data
Publication date: 13.05.2022
Creation date: 10.05.2022
Identifier: DOI: 10.5445/IR/1000146001
KITopen-ID: 1000146001
License: Creative Commons Attribution 4.0 International
Readme

Supplementary material for "Performance-Detective: Automatic Deduction of Cheap and Accurate Performance Models" (DOI 10.1145/3524059.3532391)

Step 1: System analysis

We provide the processed JSON output of Perf-Taint for the Pace3D and Kripke case studies, as well as the bitcode of Kripke used as input to Perf-Taint. Because Pace3D is closed source, we cannot provide its source code to reproduce the analysis results.
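
For orientation, here is a minimal sketch of how such a processed Perf-Taint JSON file could be inspected. The file name and the keys "functions" and "params" are assumptions for illustration; the actual schema in this archive may differ.

```python
import json

# Load the processed Perf-Taint output (hypothetical file name).
with open("perf_taint_kripke.json") as f:
    analysis = json.load(f)

# List which parameters each analyzed function depends on
# (hypothetical key names, for illustration only).
for function, info in analysis["functions"].items():
    print(f"{function} depends on parameters: {info['params']}")
```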

Step 2: Experiment design

We provide scripts that:

  • check the output of Perf-Taint for parameters that do not interact with each other,
  • detect iteration parameters that potentially influence the runtime of the calculation linearly, and
  • check the coefficient of variation of single iterations to confirm the linear influence (see the sketch below).
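
The following is a minimal sketch of the coefficient-of-variation check, assuming per-iteration runtimes already extracted from a profile. The runtimes and the threshold are hypothetical values; the actual scripts in this archive may differ.

```python
import statistics

def coefficient_of_variation(iteration_runtimes):
    """Standard deviation divided by mean of the per-iteration runtimes."""
    return statistics.stdev(iteration_runtimes) / statistics.mean(iteration_runtimes)

# Hypothetical per-iteration runtimes (seconds) from one profiled run.
runtimes = [1.02, 0.98, 1.01, 0.99, 1.00]

# Assumed threshold: a small CV means iterations take near-constant time,
# so the iteration parameter plausibly influences the runtime linearly.
if coefficient_of_variation(runtimes) < 0.05:
    print("Iterations take near-constant time; linear influence confirmed.")
```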

Step 3: Instrumented experiments

We provide the profiles of all measurements conducted for the training points.

For Pace3D, we had to add one function to the instrumentation filter manually because of a bug in Score-P: functions declared as static inline inside a header file and then called from different translation units are not instrumented correctly by Score-P, so the resulting profiles are incorrect. We can work around this bug by also including the function that calls the static inline function in the filter.
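
As an illustration, here is a minimal sketch of such a workaround, assuming a Score-P filter file in the standard region-filter syntax. The region name caller_of_inline is hypothetical; the actual filter used for Pace3D contains different entries.

```python
# Write a Score-P region filter that, besides excluding everything else,
# INCLUDEs the (hypothetical) function calling the static inline function,
# so its profile data is attributed correctly.
workaround_rules = """\
SCOREP_REGION_NAMES_BEGIN
  EXCLUDE *
  INCLUDE caller_of_inline
SCOREP_REGION_NAMES_END
"""

with open("scorep.filter", "w") as f:
    f.write(workaround_rules)
```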

Training Data Pace3D

Training Data Performance-Detective

Performance-Detective derived a minimal experiment design of 25 configurations, each measured once, resulting in 25 measurements in total.

The data for Extra-P is split into two folders that contain the same measurements, labeled differently. As we did not implement modeling for the optimized subset, we later create two models (one using procs and vol, one using procs and cubes) and extract the model of each function from the respective model, depending on whether the function relies on vol or on cubes.
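
A hypothetical sketch of this per-function selection follows; the dictionary and function names are illustrative only.

```python
# deps maps each function name to the parameter it depends on (known from
# Perf-Taint); the two model dictionaries map function names to their
# Extra-P models from the two differently labeled folders.
def select_per_function_models(deps, models_procs_vol, models_procs_cubes):
    selected = {}
    for function, parameter in deps.items():
        if parameter == "vol":
            selected[function] = models_procs_vol[function]
        else:  # the function relies on cubes
            selected[function] = models_procs_cubes[function]
    return selected
```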

The data for PIM is the same as in the Extra-P folder, just labeled with all the values measured.

Training Data Full-Factorial

625 measurements covering all 125 possible combinations of the five values each of procs, vol, and cubes; each configuration was measured 5 times.

Training Data Plackett-Burman

49 samples selected using a random seed and 5 levels; effectively a subset of the full-factorial measurements. Each configuration was measured 5 times.
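
The following is a minimal sketch of a seeded selection of a full-factorial subset. The parameter levels and the seed are assumptions, and the actual Plackett-Burman design generation in the scripts differs in detail.

```python
import itertools
import random

# Assumed parameter levels (five values each); the real levels differ.
procs = [2, 4, 8, 16, 32]
vol = [50, 100, 150, 200, 250]
cubes = [10, 20, 30, 40, 50]

# Full-factorial grid: 5 * 5 * 5 = 125 configurations.
grid = list(itertools.product(procs, vol, cubes))

random.seed(0)  # assumed seed; the archive does not state the actual one
subset = random.sample(grid, 49)
```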

Training Data Kripke

Training Data Performance-Detective

Performance-Detective derived a minimal experiment design of 5 configurations, each measured once, resulting in 5 measurements in total.

The data for Extra-P is split into two folders that contain the same measurements, labeled differently. As we did not implement modeling for the optimized subset, we later create two models (one for procs, one for dirsets) and extract the model of each function from the respective model, depending on whether the function relies on procs or on dirsets. However, for the single-parameter modeler, modeling based on dependencies from Perf-Taint is not yet implemented; therefore, the measurements are labeled as two-parameter experiments following the policy detailed in the respective folder.

The data for PIM is the same as in the Extra-P folder, just labeled with all the values measured.

Training Data Full-Factorial

125 measurements covering all 25 possible combinations of the five values each of procs and dirsets; each configuration was measured 5 times.

Training Data Plackett-Burman

10 samples selected using a random seed and 5 levels; effectively a subset of the full-factorial measurements. Each configuration was measured 5 times.

Step 4: Modeling and Evaluation

Data of each case study is in the respective folder.

Evaluation Measurements

Extrapolated and interpolated measurement data are in the respective folder.

Extra-P

Contains a script for calculating the model errors with respect to the extrapolated and interpolated evaluation measurements.
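
A minimal sketch of such an error computation, assuming per-configuration model predictions and evaluation measurements; all numbers are made up, and the exact metric in calculate_model_errors.py may differ.

```python
# Hypothetical model predictions and measured runtimes (seconds).
predictions = [10.2, 19.7, 40.1]
measurements = [10.0, 20.5, 38.9]

# Relative error of each prediction against its measurement.
errors = [abs(p - m) / m for p, m in zip(predictions, measurements)]
print(f"mean relative error: {sum(errors) / len(errors):.2%}")
```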

Models

Contains Extra-P models as well as a converter from the Extra-P format to text (creates PerformanceModel.py), which is used in calculate_model_errors.py. The converter requires an installation of Extra-P to run.

Performance-Influence Models

We use the scripts by Weber et al., partly modified to account for the configurations of the case studies and for modeling based on the known dependencies of functions on parameters.
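
For illustration, here is a minimal sketch of a performance-influence model as a regression from configuration parameters to a method's runtime. All values are made up, and the actual learning procedure in the scripts is more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: rows are [procs, vol] configurations,
# y holds the corresponding runtimes of one method (seconds).
X = np.array([[2, 100], [4, 100], [2, 200], [4, 200]])
y = np.array([20.4, 10.3, 40.9, 20.6])

# Fit a linear performance-influence model for this method.
model = LinearRegression().fit(X, y)
print("intercept:", model.intercept_, "coefficients:", model.coef_)
```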

Data

Contains the data of the cubex files in CSV format. The data is parsed with create_csv_from_cubex.py.
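
A hypothetical sketch of such a CSV layout follows; the column names are assumptions, and the files produced by create_csv_from_cubex.py may be structured differently.

```python
import csv

# One row per measured function and configuration (illustrative values).
rows = [
    {"procs": 2, "vol": 100, "cubes": 10, "function": "solve", "time": 1.23},
    {"procs": 4, "vol": 100, "cubes": 10, "function": "solve", "time": 0.64},
]

with open("measurements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["procs", "vol", "cubes", "function", "time"])
    writer.writeheader()
    writer.writerows(rows)
```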

Model errors

Contains the CSV files with the model errors of the respective models. The files are generated by learn_method_level_model_with_deps.py and learn_method_level_model_without_deps.py, respectively.

License

The files calculate_model_errors.py, extrap_to_text.py, and create_csv_from_cubex.py are modified parts of the Extra-P Software (cf. cube_file_reader2.py and the license file LICENSE-BSD3-EXTRAP).

The files learn_method_level_model_with_deps.py and learn_method_level_model_without_deps.py are modified parts of the supplementary material of Weber et al. The license file LICENSE-GPL-PIM applies to these files.

Type of research data: Dataset