main module

Main module to execute the active learning pipeline from the CLI

main.create_data_module(dataset, data_dir, batch_size, num_workers, random_state, active_learning_config, dataset_config)[source]

Creates the correct data module.

Parameters
  • dataset (string) – Name of the dataset. E.g. ‘brats’.

  • data_dir (string, optional) – Main directory with the dataset. E.g. ‘./data’.

  • batch_size (int, optional) – Size of training examples passed in one training step.

  • num_workers (int, optional) – Number of workers.

  • random_state (int) – Random seed used for shuffling the data.

  • active_learning_config (Dict[str, Any]) – Dictionary with active learning specific parameters.

  • dataset_config (Dict[str, Any]) – Dictionary with dataset specific parameters.

Returns

The data module.
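
For illustration, a minimal sketch of a call. The empty config dicts are placeholders, since their valid keys are dataset and setup specific and not enumerated in this reference:

    from main import create_data_module

    # Placeholder configs: the valid keys of both dicts depend on the
    # chosen dataset and active learning setup.
    data_module = create_data_module(
        dataset="brats",
        data_dir="./data",
        batch_size=16,
        num_workers=4,
        random_state=42,
        active_learning_config={},
        dataset_config={},
    )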

main.create_model(data_module, architecture, learning_rate, lr_scheduler, num_levels, model_config, loss_weight_scheduler_max_steps=None)[source]

Creates the specified model.

Parameters
  • data_module (ActiveLearningDataModule) – A data module object providing data.

  • architecture (string) – Name of the desired model architecture. E.g. ‘u_net’.

  • learning_rate (float) – The step size at each iteration while moving towards a minimum of the loss.

  • lr_scheduler (string, optional) – Algorithm used for dynamically updating the learning rate during training. E.g. ‘reduceLROnPlateau’ or ‘cosineAnnealingLR’.

  • num_levels (int, optional) – Number of levels (encoder and decoder blocks) in the U-Net. Defaults to 4.

  • model_config (Dict[str, Any], optional) – Dictionary with model specific parameters.

  • loss_weight_scheduler_max_steps (int, optional) – Number of steps for the pseudo-label loss weight scheduler.

Returns

The model.
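
A sketch of creating a model on top of a data module; the data module is built as in the create_data_module example above, and the empty model_config is a placeholder for model specific keys:

    from main import create_data_module, create_model

    # Data module as in the create_data_module example above.
    data_module = create_data_module(
        dataset="brats",
        data_dir="./data",
        batch_size=16,
        num_workers=4,
        random_state=42,
        active_learning_config={},
        dataset_config={},
    )

    # model_config is a placeholder; model specific keys are not
    # enumerated in this reference.
    model = create_model(
        data_module=data_module,
        architecture="u_net",
        learning_rate=0.0001,
        lr_scheduler="reduceLROnPlateau",
        num_levels=4,
        model_config={},
    )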

main.create_query_strategy(strategy_config)[source]

Initialises the chosen query strategy.

Parameters

strategy_config (dict) – Configuration of the query strategy.
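
A minimal sketch; the key and value in strategy_config are hypothetical, since the expected schema is strategy specific and not listed here:

    from main import create_query_strategy

    # "type" and "random" are hypothetical placeholders for whatever
    # keys and strategy names the configuration actually expects.
    strategy = create_query_strategy(strategy_config={"type": "random"})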

main.run_active_learning_pipeline(architecture, dataset, strategy_config, experiment_name, batch_size=16, checkpoint_dir=None, data_dir='./data', dataset_config=None, model_config=None, model_selection_criterion='mean_dice_score_0.5', active_learning_config=None, epochs=50, experiment_tags=None, gpus=1, num_workers=4, learning_rate=0.0001, lr_scheduler=None, num_levels=4, prediction_count=None, prediction_dir='./predictions', wandb_project_name='active-segmentation', early_stopping=False, random_state=42, deterministic_mode=True, save_model_every_epoch=False, clear_wandb_cache=False)[source]

Main function to execute an active learning pipeline run, or start an active learning simulation.

Parameters
  • architecture (string) – Name of the desired model architecture. E.g. ‘u_net’.

  • dataset (string) – Name of the dataset. E.g. ‘brats’.

  • strategy_config (dict) – Configuration of the query strategy.

  • experiment_name (string) – Name of the experiment.

  • batch_size (int, optional) – Size of training examples passed in one training step.

  • checkpoint_dir (str, optional) – Directory where the model checkpoints are to be saved.

  • data_dir (string, optional) – Main directory with the dataset. E.g. ‘./data’.

  • dataset_config (Dict[str, Any], optional) – Dictionary with dataset specific parameters.

  • model_config (Dict[str, Any], optional) – Dictionary with model specific parameters.

  • active_learning_config (Dict[str, Any], optional) – Dictionary with active learning specific parameters.

  • epochs (int, optional) – Number of iterations with the full dataset.

  • experiment_tags (Iterable[string], optional) – Tags with which to label the experiment.

  • gpus (int) – Number of GPUs to use for model training.

  • num_workers (int, optional) – Number of workers.

  • learning_rate (float) – The step size at each iteration while moving towards a minimum of the loss.

  • lr_scheduler (string, optional) – Algorithm used for dynamically updating the learning rate during training. E.g. ‘reduceLROnPlateau’ or ‘cosineAnnealingLR’.

  • num_levels (int, optional) – Number of levels (encoder and decoder blocks) in the U-Net. Defaults to 4.

  • early_stopping (bool, optional) – Whether to enable early stopping when the model stops improving. Defaults to False.

  • random_state (int) – Random seed used for shuffling the data.

  • wandb_project_name (string, optional) – Name of the project that the W&B runs are stored in.

  • deterministic_mode (bool, optional) – Whether only deterministic CUDA operations should be used. Defaults to True.

  • save_model_every_epoch (bool, optional) – Whether the model files of all epochs are to be saved or only the model file of the best epoch. Defaults to False.

  • clear_wandb_cache (bool, optional) – Whether the whole Weights and Biases cache should be deleted when the run is finished. Should only be used when no other runs are running in parallel. Defaults to False.

Returns

None.
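
Putting the documented arguments together, a run could be started as below. The strategy_config content is again a hypothetical placeholder; the remaining values simply echo the documented defaults and examples:

    from main import run_active_learning_pipeline

    run_active_learning_pipeline(
        architecture="u_net",
        dataset="brats",
        strategy_config={"type": "random"},  # hypothetical schema
        experiment_name="u_net-brats-example",
        batch_size=16,
        data_dir="./data",
        epochs=50,
        gpus=1,
        num_workers=4,
        learning_rate=0.0001,
        lr_scheduler="cosineAnnealingLR",
        num_levels=4,
        early_stopping=True,
        random_state=42,
    )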

main.run_active_learning_pipeline_from_config(config_file_name, hp_optimisation=False)[source]

Runs the active learning pipeline based on a config file.

Parameters
  • config_file_name – Name of or path to the config file.

  • hp_optimisation – If set, runs the pipeline with different hyperparameters based on the configured sweep file. Defaults to False.
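
For example (the file name is a placeholder, and the expected config schema is not described in this reference):

    from main import run_active_learning_pipeline_from_config

    # Single run based on a config file.
    run_active_learning_pipeline_from_config("config.yaml")

    # Sweep over hyperparameters based on the configured sweep file.
    run_active_learning_pipeline_from_config("config.yaml", hp_optimisation=True)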
