
Experimental Data for the Paper "Finding Optimal Diverse Feature Sets with Alternative Feature Selection" (Version 3)

Bach, Jakob

Abstract:

These are the experimental data for the third version (v3) of the paper

> Bach, Jakob. "Finding Optimal Diverse Feature Sets with Alternative Feature Selection"

This version of the paper was published on [arXiv](https://arxiv.org/) in 2025.
You can find the paper [here](https://doi.org/10.48550/arXiv.2307.11607) and the code [here](https://github.com/jakob-bach/alternative-feature-selection).
See the `README` for details.

The datasets used in our study (which we also provide here) originate from [PMLB](https://epistasislab.github.io/pmlb/).
The corresponding [GitHub repository](https://github.com/EpistasisLab/pmlb) is MIT-licensed ((c) 2016 Epistasis Lab at UPenn).
Please see the file `LICENSE` in the folder `datasets/` for the license text.

Associated institution(s) at KIT: Institut für Programmstrukturen und Datenorganisation (IPD)
Publication type: Research data
Publication date: 28.01.2025
Creation date: 28.09.2024 - 08.12.2024
Identifier: DOI: 10.35097/4ttgrpx92p30jwww
KITopen-ID: 1000178448
License: Creative Commons Attribution 4.0 International
Keywords: feature selection, alternatives, constraints, mixed-integer programming, explainability, interpretability, XAI
Readme

Experimental Data for the Paper "Finding Optimal Diverse Feature Sets with Alternative Feature Selection" (Version 3)

These are the experimental data for the third version (v3) of the paper

> Bach, Jakob. "Finding Optimal Diverse Feature Sets with Alternative Feature Selection"

published on arXiv in 2025.
If we create further versions of this paper in the future, these experimental data may cover them as well.

Check our GitHub repository for the code and instructions to reproduce the experiments.
We obtained the experimental results on a server with an AMD EPYC 7551 CPU (32 physical cores, base clock of 2.0 GHz) and 160 GB RAM.
The operating system was Ubuntu 20.04.6 LTS.
The Python version was 3.8.
With this configuration, running the experimental pipeline (run_experiments.py) took about 249 hours.

The commit hash for the last run of the experimental pipeline (run_experiments.py) is b4083afcc4.
The commit hash for the last run of the evaluation pipeline (run_evaluation_arxiv.py) is 7d027c2382.
We also tagged both commits (run-2024-09-28-arXiv-v3 and evaluation-2024-12-08-arXiv-v3).

The experimental data are stored in three folders, i.e., datasets/, plots/, and results/.
Further, the console output of run_evaluation_arxiv.py is stored in Evaluation_console_output.txt (manually copied from the console to a file).
In the following, we describe the structure and content of each data file.

datasets/

These are the input data for the experimental pipeline run_experiments.py, i.e., the prediction datasets.
The folder contains one overview file, one license file, and two files for each of the 30 datasets.

The original datasets were downloaded from PMLB with the script prepare_datasets.py.
Note that we do not own the copyright for these datasets.
However, the GitHub repository of PMLB, which stores the original datasets, is MIT-licensed ((c) 2016 Epistasis Lab at UPenn).
Thus, we include the file LICENSE from that repository.

After downloading from PMLB, we split each dataset into the feature part (_X.csv) and the target part (_y.csv), which we save separately.
Both file types are CSVs that, apart from the column names, contain only numeric values (categorical features are ordinally encoded in PMLB).
There are no missing values.
Each row corresponds to a data object (= instance, sample), and each column either corresponds to a feature (in _X) or the target (in _y).
The first line in each _X file contains the names of the features as strings; for _y files, there is only one column, always named target.

_dataset_overview.csv contains meta-data for the datasets, like the number of instances and features.
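
For example, a minimal sketch for loading one dataset with pandas (the dataset name below is a placeholder; the actual names are listed in _dataset_overview.csv):

import pandas as pd

dataset_name = 'some_dataset'  # placeholder; pick one of the 30 dataset names from _dataset_overview.csv
X = pd.read_csv(f'datasets/{dataset_name}_X.csv')  # features, one column per feature
y = pd.read_csv(f'datasets/{dataset_name}_y.csv')['target']  # target, single column named "target"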

plots/

These are the output files of the evaluation pipeline run_evaluation_arxiv.py.
We include these plots in our paper.

results/

These are the output data of the experimental pipeline in the form of CSVs, produced by the script run_experiments.py.
_results.csv contains all results merged into one file and acts as input for the script run_evaluation_arxiv.py.
The remaining files are subsets of the results, as the experimental pipeline parallelizes over 30 datasets, 5 cross-validation folds, and 5 feature-selection methods.
Thus, there are 30 * 5 * 5 = 750 files containing subsets of the results.
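
If you want to combine the per-task files yourself rather than use _results.csv, a rough sketch with pandas could look as follows (assuming that, apart from _results.csv, all CSV files in results/ are exactly these 750 subset files; pd.concat aligns columns by name, so files without the wrapper_iters column are handled as well):

import glob
import os

import pandas as pd

subset_files = [file for file in glob.glob('results/*.csv')
                if os.path.basename(file) != '_results.csv']  # everything except the merged file
merged_results = pd.concat((pd.read_csv(file) for file in subset_files), ignore_index=True)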

Each row in a result file corresponds to one feature set.
One can identify individual search runs for alternatives with a combination of multiple columns (see the grouping sketch after this list), i.e.:

  • dataset: dataset_name
  • cross-validation fold: split_idx
  • feature-selection method: fs_name
  • search method: search_name
  • objective aggregation: objective_agg
  • feature-set size: k
  • number of alternatives: num_alternatives
  • dissimilarity threshold: tau_abs
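
As a sketch, grouping the merged results by these columns with pandas yields one group per search run (each group then holds the feature sets found in that run, i.e., the original feature set and its alternatives):

import pandas as pd

results = pd.read_csv('results/_results.csv')
run_columns = ['dataset_name', 'split_idx', 'fs_name', 'search_name',
               'objective_agg', 'k', 'num_alternatives', 'tau_abs']
feature_sets_per_run = results.groupby(run_columns).size()  # number of feature sets in each search run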

The remaining columns mostly represent evaluation metrics.
In detail, all result files contain the following columns:

  • selected_idxs (list of non-negative ints, e.g., [0, 4, 5, 6, 8]): The indices (starting from 0) of the selected features (i.e., columns in the corresponding dataset).
    Might also be an empty list, i.e., [], if no valid solution was found.
    In that case, the two _objective columns and the four _mcc columns contain a missing value (empty string).
  • train_objective (float in [-1, 1] + missing values): The training-set objective value of the feature set.
    Three feature-selection methods (FCBF, MI, Model Importance) have the range [0, 1], while two methods (mRMR, Greedy Wrapper) have the range [-1, 1].
  • test_objective (float in [-1, 1] + missing values): The test-set objective value of the feature set.
  • optimization_time (non-negative float): Time for alternative feature selection in seconds.
    The interpretation of this value depends on the search method for alternatives and the feature-selection method.
    (1a) In solver-based search for alternatives in combination with white-box feature-selection methods, this value corresponds to one solver call.
    (1b) In solver-based search for alternatives in combination with wrapper feature selection, we record the total runtime of the Greedy Wrapper algorithm, which calls the solver and trains prediction models multiple times.
    (2) In heuristic search for alternatives (algorithms Greedy Balancing and Greedy Replacement), we record the total runtime of the heuristic search algorithms.
  • optimization_status (int in {0, 1, 2, 6}): The status of the solver-based or heuristic search method for alternatives.
    For Greedy Wrapper, this is only the status of the last solver call (last iteration) and refers to optimizing similarity to the previous best solution (under swap constraints) rather than optimizing feature-set quality.
    • 0 = (proven as) optimal; cannot occur for heuristic search methods
    • 1 = feasible (valid solution, but might be suboptimal)
    • 2 = (proven as) infeasible; cannot occur for heuristic search methods
    • 6 = not solved (no valid solution found, but one might exist)
  • decision_tree_train_mcc (float in [-1, 1] + missing values): Training-set prediction performance (in terms of Matthews Correlation Coefficient) of a decision tree trained with the selected features.
  • decision_tree_test_mcc (float in [-1, 1] + missing values): Test-set prediction performance (in terms of Matthews Correlation Coefficient) of a decision tree trained with the selected features.
  • random_forest_train_mcc (float in [-1, 1] + missing values): Training-set prediction performance (in terms of Matthews Correlation Coefficient) of a random forest trained with the selected features.
    Not evaluated in the current version of our paper, as the observed trends are similar to those with decision trees.
  • random_forest_test_mcc (float in [-1, 1] + missing values): Test-set prediction performance (in terms of Matthews Correlation Coefficient) of a random forest trained with the selected features.
    Not evaluated in the current version of our paper, as the observed trends are similar to those with decision trees.
  • k (int in {5, 10}): The number of features to be selected.
  • tau_abs (int in [1, 10]): The dissimilarity threshold for alternatives, corresponding to the absolute number of features (k * tau) that have to differ between feature sets.
  • num_alternatives (int in {1, 2, 3, 4, 5, 10}): The number of desired alternative feature sets, not counting the original (zeroth) feature set.
    A number from {1, 2, 3, 4, 5} for solver-based simultaneous search and Greedy Balancing, but always 10 for solver-based sequential search and Greedy Replacement.
  • objective_agg (string, 2 different values): The name of the quality-aggregation function for alternatives (min or sum).
    Min-aggregation or sum-aggregation for solver-based simultaneous search but always sum-aggregation for the remaining search methods
    (where the value of this parameter does not matter for the search but needs to be specified for compatibility reasons).
  • search_name (string, 4 different values):
    The name of the search method for alternatives (search_greedy_balancing, search_greedy_replacement, search_sequentially, or search_simultaneously).
    Greedy Balancing and Greedy Replacement are (solver-free) heuristics and are only combined with two feature-selection methods (MI and Model Importance).
    The other two values denote solver-based search here (though in the paper, we also categorize optimization problems as sequential/simultaneous, no matter how they are solved) and are combined with all five feature-selection methods.
  • fs_name (string, 5 different values): The name of the feature-selection method (FCBFSelector, MISelector, ModelImportanceSelector (= Model Gain in the paper), MRMRSelector, or GreedyWrapperSelector).
  • dataset_name (string, 30 different values): The name of the PMLB dataset.
  • n (positive int): The number of features of the PMLB dataset.
  • split_idx (int in [0, 4]): The index of the cross-validation fold.
  • wrapper_iters (int in [1, 1000] + missing values): The number of iterations if Greedy Wrapper is used for feature selection, missing value (empty string) in the other cases.
    This column does not exist in result files that do not contain wrapper results.

You can easily read in any of the result files with pandas:

import pandas as pd

results = pd.read_csv('results/_results.csv')

All result files are comma-separated and contain plain numbers and unquoted strings, apart from the column selected_idxs (which is quoted and represents lists of integers).
The first line in each result file contains the column names.
You can use the following code to make sure that the lists of feature indices are treated as lists (rather than plain strings):

import ast

results['selected_idxs'] = results['selected_idxs'].apply(ast.literal_eval)
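
Building on the snippet above, a small sketch for mapping the selected feature indices of a result row back to feature names (assuming the datasets/ folder from this data package lies next to results/):

row = results.iloc[0]  # any result row; its dataset_name links it to the files in datasets/
X = pd.read_csv(f'datasets/{row["dataset_name"]}_X.csv')
selected_features = [X.columns[idx] for idx in row['selected_idxs']]  # empty if no valid solution was found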