Configuration
The configuration file is a .json file. Most settings must be set explicitly in the configuration file.
The configuration file is split into several sections. There are no requirements for ordering within these sections, or for ordering of the sections themselves.
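Since the configuration is plain JSON, it can be inspected with Python's standard `json` module. A minimal sketch (the field names come from the sections below; the values and the nesting shown are illustrative, not defaults):

```python
import json

# Hypothetical minimal configuration; field names are taken from the
# sections documented on this page, values are illustrative only.
config_text = """
{
    "experiment_name": "unet-baseline",
    "batch_size": 16,
    "epochs": 10,
    "model_config": {"architecture": "u_net", "optimizer": "adam"}
}
"""

config = json.loads(config_text)
print(config["model_config"]["architecture"])  # prints "u_net"
```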
General
- wandb_project_name
The name of the Weights & Biases project. Weights & Biases is set up in the project for experiment tracking and logging.
- experiment_name
The name of the Weights & Biases experiment.
- experiment_tags
List of tags to place on the run in Weights & Biases.
- checkpoint_dir
The local directory where the model checkpoints are stored.
- batch_size
Number of training examples passed in one training step.
- epochs
The number of epochs, i.e. full passes over the training dataset, for which the model should be trained.
- num_workers
Number of worker processes for data loading. If `0`, multi-process data loading is disabled.
- gpus
The number of GPUs to be used. If `0`, the training runs on the CPU.
- prediction_count
The number of sample predictions to be generated for the validation set. This option is intended for use cases where you want to assess a model’s quality using sample predictions.
- prediction_dir
The local directory to store the sample predictions.
- random_state
Constant to ensure reproducibility of random operations.
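Taken together, the general settings might look like the following. This is a sketch only: all values are illustrative, not defaults or recommendations.

```json
{
    "wandb_project_name": "segmentation",
    "experiment_name": "unet-baseline",
    "experiment_tags": ["baseline", "u-net"],
    "checkpoint_dir": "./checkpoints",
    "batch_size": 16,
    "epochs": 50,
    "num_workers": 4,
    "gpus": 1,
    "prediction_count": 5,
    "prediction_dir": "./predictions",
    "random_state": 42
}
```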
[model_config] section
The `model_config` section specifies parameters to set up the segmentation model architecture and losses.
- architecture
The name of the architecture to use. Allowable values are: `"u_net"`.
Note
If the model architecture of your choice is not yet included in the framework, it can be added by subclassing `models.pytorch_model.PytorchModel()`.
- optimizer
The name of the optimization algorithm used to update the model weights based on the loss. Allowable values are: `"adam"` and `"sgd"` (stochastic gradient descent).
- loss_config
Dictionary with loss parameters. The key `"type"` is mandatory, with one of the allowable values: `"cross_entropy"`, `"dice"`, `"cross_entropy_dice"`, `"general_dice"`, `"fp"`, `"fp_dice"`, `"focal"`. More detailed documentation and configuration options for the losses can be found in `functional.losses`.
- learning_rate
The step size at each iteration while moving towards a minimum of the loss. Defaults to `0.0001`.
- num_levels
Number of levels (encoder and decoder blocks) in the U-Net. Defaults to `4`.
- dim
The dimensionality of the U-Net. Allowable values are: `2` and `3`. Defaults to `2`.
- model_selection_criterion
The criterion for selecting the best model for checkpointing. Defaults to `"loss"`.
- train_metrics
A list with the names of the metrics that should be computed and logged in each training and validation epoch of the training loop. Available options: `"dice_score"`, `"sensitivity"`, `"specificity"`, `"hausdorff95"`. Defaults to `["dice_score"]`.
- train_metric_confidence_levels
A list of confidence levels for which the metrics specified in the `train_metrics` parameter should be computed in the training loop (`trainer.fit()`). This parameter is used only for multi-label classification tasks. Defaults to `[0.5]`.
- test_metrics
A list with the names of the metrics that should be computed and logged in the model validation or testing loop (`trainer.validate()`, `trainer.test()`). Available options: `"dice_score"`, `"sensitivity"`, `"specificity"`, `"hausdorff95"`. Defaults to `["dice_score", "sensitivity", "specificity", "hausdorff95"]`.
- test_metric_confidence_levels
A list of confidence levels for which the metrics specified in the `test_metrics` parameter should be computed in the validation or testing loop. This parameter is used only for multi-label classification tasks. Defaults to `[0.5]`.
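A possible `model_config` section, shown inside the top-level configuration object, might look as follows. Where the page documents defaults, those values are used; the loss type is an arbitrary pick from the allowable values, and `loss_config` may take further keys per `functional.losses`.

```json
{
    "model_config": {
        "architecture": "u_net",
        "optimizer": "adam",
        "loss_config": {"type": "dice"},
        "learning_rate": 0.0001,
        "num_levels": 4,
        "dim": 2,
        "model_selection_criterion": "loss",
        "train_metrics": ["dice_score"],
        "test_metrics": ["dice_score", "sensitivity", "specificity", "hausdorff95"]
    }
}
```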
[dataset_config] section
The `dataset_config` section specifies parameters to set up the dataset and data loading.
- dataset
The name of the dataset to use. Allowable values are: `"brats"`, `"decathlon"` and `"bcss"`.
- data_dir
The directory where the data of the selected dataset resides.
- cache_size
Number of images to keep in memory between epochs to speed up data loading. Defaults to `0`.
Note
Further mandatory or optional fields can be found in the documentation of the respective data module.
Available data modules as of now are `datasets.decathlon_data_module.DecathlonDataModule()`, `datasets.brats_data_module.BraTSDataModule()` and `datasets.bcss_data_module.BCSSDataModule()`.
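A sketch of a `dataset_config` section, again inside the top-level object. The data directory path is hypothetical, and the respective data module may require further fields, as noted above.

```json
{
    "dataset_config": {
        "dataset": "decathlon",
        "data_dir": "./data/decathlon",
        "cache_size": 100
    }
}
```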
[active_learning_config] section
The `active_learning_config` section specifies parameters for the active learning loop.
- active_learning_mode
Enables/disables the active learning pipeline. Defaults to `False`. If `False`, the model is trained on the full training dataset.
- reset_weights
Enables/disables resetting of the model weights after every active learning iteration. Defaults to `False`.
- initial_training_set_size
Initial size of the training set if the active learning mode is activated. Defaults to `1`.
- iterations
Number of iterations of the active learning pipeline to execute. If `None`, the active learning pipeline is run until the whole dataset is labeled. Defaults to `None`.
- items_to_label
Number of items that should be selected for labeling in each active learning iteration. Defaults to `1`.
- batch_size_unlabeled_set
Batch size for the unlabeled set. Defaults to `batch_size`.
- heatmaps_per_iteration
Number of heatmaps to be generated per active learning iteration. This option is intended for use cases where you want to assess the quality of a sampling strategy using heatmaps of the model’s predictions. Defaults to `0`.
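An `active_learning_config` section enabling the loop might look like this. All values are illustrative; note that JSON booleans are lowercase (`true`/`false`).

```json
{
    "active_learning_config": {
        "active_learning_mode": true,
        "reset_weights": false,
        "initial_training_set_size": 10,
        "iterations": 20,
        "items_to_label": 5,
        "batch_size_unlabeled_set": 16,
        "heatmaps_per_iteration": 2
    }
}
```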
[strategy_config] section
The `strategy_config` section specifies parameters to set up the strategy used to query new examples for labeling.
- type
Name of the sampling strategy to use. Allowable values are: `"random"`, `"interpolation"`, `"uncertainty"`, `"representativeness_distance"`, `"representativeness_clustering"` and `"representativeness_uncertainty"`.
- description
Detailed description of the strategy configuration. The information is logged to make experiments easier to interpret.
Note
Further mandatory or optional fields can be found in the documentation of the respective strategy. Available strategies and their documentation can be found in the `query_strategies` package.
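A sketch of a `strategy_config` section, inside the top-level object. The strategy type is an arbitrary pick from the allowable values, the description text is hypothetical, and the chosen strategy may require further fields, as noted above.

```json
{
    "strategy_config": {
        "type": "uncertainty",
        "description": "Uncertainty sampling based on the model's prediction confidence."
    }
}
```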