secmlt.adv.poisoning package

Submodules

secmlt.adv.poisoning.backdoor module

Simple backdoor attack in PyTorch.

class secmlt.adv.poisoning.backdoor.BackdoorDatasetPyTorch(*args: Any, **kwargs: Any)[source]

Bases: PoisoningDatasetPyTorch

Dataset class that adds backdoor triggers to a subset of the samples.

__init__(dataset: torch.utils.data.Dataset, data_manipulation_func: callable, trigger_label: int = 0, portion: float | None = None, poisoned_indexes: list[int] | torch.Tensor | None = None) → None[source]

Create the backdoored dataset. A usage sketch follows the parameter list.

Parameters:
  • dataset (torch.utils.data.Dataset) – PyTorch dataset.

  • data_manipulation_func (callable) – Function to manipulate the data and add the backdoor.

  • trigger_label (int, optional) – Label to associate with the backdoored data (default 0).

  • portion (float, optional) – Fraction of samples on which the backdoor will be injected, between 0 and 1 (default 0.1).

  • poisoned_indexes (list[int] | torch.Tensor, optional) – Specific indexes of samples to perturb. Alternative to portion.
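Example (illustrative sketch, not taken from the library documentation): wrap an MNIST training set and stamp a small white patch as the trigger. The assumption here is that data_manipulation_func receives a sample tensor and returns the modified tensor; the patch trigger, the 10% portion, and the MNIST setup are only for illustration.

    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    from secmlt.adv.poisoning.backdoor import BackdoorDatasetPyTorch


    def add_corner_patch(x: torch.Tensor) -> torch.Tensor:
        """Stamp a 3x3 white patch in the bottom-right corner (assumed trigger)."""
        x = x.clone()
        x[..., -3:, -3:] = 1.0
        return x


    mnist = datasets.MNIST(
        root="data", train=True, download=True, transform=transforms.ToTensor()
    )

    # Poison 10% of the samples: each gets the patch and is relabeled as class 0.
    backdoored = BackdoorDatasetPyTorch(
        dataset=mnist,
        data_manipulation_func=add_corner_patch,
        trigger_label=0,
        portion=0.1,
    )
    loader = DataLoader(backdoored, batch_size=64, shuffle=True)

Clean samples pass through unchanged, so the resulting loader can be used in place of the clean one when training a model on the backdoored data.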

secmlt.adv.poisoning.base_data_poisoning module

Base class for data poisoning.

class secmlt.adv.poisoning.base_data_poisoning.PoisoningDatasetPyTorch(*args: Any, **kwargs: Any)[source]

Bases: Dataset

Dataset class that applies poisoning to selected samples of a wrapped dataset.

__getitem__(idx: int) → tuple[torch.Tensor, int][source]

Get item from the dataset.

Parameters:
  • idx (int) – Index of the item to return.

Returns:
  The (sample, label) pair at the position specified by idx.

Return type:
  tuple[torch.Tensor, int]

__init__(dataset: torch.utils.data.Dataset, data_manipulation_func: callable = <function PoisoningDatasetPyTorch.<lambda>>, label_manipulation_func: callable = <function PoisoningDatasetPyTorch.<lambda>>, portion: float | None = None, poisoned_indexes: list[int] | torch.Tensor | None = None) → None[source]

Create the poisoned dataset. A usage sketch follows the parameter list.

Parameters:
  • dataset (torch.utils.data.Dataset) – PyTorch dataset.

  • data_manipulation_func (callable) – Function applied to the poisoned samples to manipulate the data.

  • label_manipulation_func (callable) – Function that returns the label to associate with the poisoned data.

  • portion (float, optional) – Fraction of samples on which the poisoning will be injected, between 0 and 1 (default 0.1).

  • poisoned_indexes (list[int] | torch.Tensor, optional) – Specific indexes of samples to perturb. Alternative to portion.
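Example (illustrative sketch): a noise-plus-label-flip poisoning of a few specific samples. The signatures assumed for the two callables (sample tensor in / tensor out, original label in / label out) are inferred from the parameter descriptions above rather than guaranteed by this page; the MNIST setup and the chosen indexes are only for illustration.

    import torch
    from torchvision import datasets, transforms

    from secmlt.adv.poisoning.base_data_poisoning import PoisoningDatasetPyTorch


    def add_noise(x: torch.Tensor) -> torch.Tensor:
        """Perturb the poisoned sample with small uniform noise."""
        return (x + 0.05 * torch.rand_like(x)).clamp(0.0, 1.0)


    def flip_label(y: int) -> int:
        """Shift the poisoned label to the next class (mod 10)."""
        return (y + 1) % 10


    mnist = datasets.MNIST(
        root="data", train=True, download=True, transform=transforms.ToTensor()
    )

    # Poison only the listed samples; every other sample is returned as-is.
    poisoned = PoisoningDatasetPyTorch(
        dataset=mnist,
        data_manipulation_func=add_noise,
        label_manipulation_func=flip_label,
        poisoned_indexes=[0, 1, 2, 42],
    )
    x, y = poisoned[0]   # tuple[torch.Tensor, int], see __getitem__ above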

__len__() → int[source]

Get the number of samples in the dataset.

Module contents

Backdoor attacks.