secmlt.adv.evasion package

Submodules

secmlt.adv.evasion.base_evasion_attack module

Base classes for implementing attacks and wrapping backends.

class secmlt.adv.evasion.base_evasion_attack.BaseEvasionAttack[source]

Bases: object

Base class for evasion attacks.

__call__(model: BaseModel, data_loader: torch.utils.data.DataLoader) → torch.utils.data.DataLoader[source]

Compute the attack against the model, using the input data.

Parameters:
  • model (BaseModel) – Model to test.

  • data_loader (DataLoader) – Test dataloader.

Returns:

Dataloader with adversarial examples and original labels.

Return type:

DataLoader
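
For example, once a concrete attack has been instantiated, it can be run directly on a model and a dataloader. A minimal sketch, assuming attack is any concrete BaseEvasionAttack instance (e.g., the PGD attack created later on this page) and that model (a BaseModel) and test_loader (a torch DataLoader) are already in scope:

    # run the attack batch-wise over the test data
    adv_loader = attack(model, test_loader)

    # adversarial examples are paired with the original labels
    for adv_x, labels in adv_loader:
        print(adv_x.shape, labels.shape)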

classmethod check_perturbation_model_available(perturbation_model: str) → bool[source]

Check whether the given perturbation model is available for the attack.

Parameters:

perturbation_model (str) – A perturbation model.

Returns:

True if the attack implements the given perturbation model.

Return type:

bool

Raises:

NotImplementedError – Raised if not implemented in the derived class.

abstract static get_perturbation_models() → set[str][source]

Get the perturbation models implemented for the given attack.

Returns:

The set of perturbation models for which the attack is implemented.

Return type:

set[str]

Raises:

NotImplementedError – Raised if the method is not implemented in the derived class.
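
As a sketch of how the two methods above interact, a concrete attack class such as ModularEvasionAttackFixedEps (documented below) can be queried for its supported perturbation models:

    from secmlt.adv.evasion.modular_attack import ModularEvasionAttackFixedEps
    from secmlt.adv.evasion.perturbation_models import LpPerturbationModels

    # set of perturbation-model strings implemented by the attack
    print(ModularEvasionAttackFixedEps.get_perturbation_models())

    # True if the L-inf perturbation model is among them
    print(
        ModularEvasionAttackFixedEps.check_perturbation_model_available(
            LpPerturbationModels.LINF
        )
    )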

property trackers: list[secmlt.trackers.tracker.Tracker] | None

Get the trackers set for this attack.

Returns:

Trackers set for the attack, if any.

Return type:

list[TRACKER_TYPE] | None

class secmlt.adv.evasion.base_evasion_attack.BaseEvasionAttackCreator[source]

Bases: object

Generic creator for attacks.

classmethod check_backend_available(backend: str) → bool[source]

Check if a given backend is available for the attack.

Parameters:

backend (str) – Backend string.

Returns:

True if the given backend is implemented.

Return type:

bool

Raises:

NotImplementedError – Raised if the requested backend is not in the list of available backends (see secmlt.adv.backends).

classmethod get_advlib_implementation() → BaseEvasionAttack[source]

Get the Adversarial Library implementation of the attack.

Returns:

Adversarial Library implementation of the attack.

Return type:

BaseEvasionAttack

Raises:

ImportError – Raised if the Adversarial Library extra is not installed.

abstract static get_backends() → set[str][source]

Get the available backends for the given attack.

Returns:

Set of implemented backends available for the attack.

Return type:

set[str]

Raises:

NotImplementedError – Raised if the method is not implemented in the derived class.

classmethod get_foolbox_implementation() → BaseEvasionAttack[source]

Get the Foolbox implementation of the attack.

Returns:

Foolbox implementation of the attack.

Return type:

BaseEvasionAttack

Raises:

ImportError – Raised if the Foolbox extra is not installed.

classmethod get_implementation(backend: str) → BaseEvasionAttack[source]

Get the implementation of the attack with the given backend.

Parameters:

backend (str) – The backend for the attack. See secmlt.adv.backends for available backends.

Returns:

Attack implementation.

Return type:

BaseEvasionAttack
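
For example, a concrete creator such as PGD (documented below) can be used to inspect and resolve backends. A minimal sketch, assuming the Backends constants referenced above are importable from secmlt.adv.backends:

    from secmlt.adv.backends import Backends  # import path assumed
    from secmlt.adv.evasion.pgd import PGD

    # backends implemented for the PGD attack
    print(PGD.get_backends())

    # resolve the PGD implementation for a specific backend
    foolbox_pgd = PGD.get_implementation(Backends.FOOLBOX)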

secmlt.adv.evasion.modular_attack module

Implementation of modular iterative attacks with customizable components.

class secmlt.adv.evasion.modular_attack.ModularEvasionAttackFixedEps(y_target: int | None, num_steps: int, step_size: float, loss_function: str | torch.nn.Module, optimizer_cls: str | partial[torch.optim.Optimizer], manipulation_function: Manipulation, initializer: Initializer, gradient_processing: GradientProcessing, trackers: list[Tracker] | Tracker | None = None)[source]

Bases: BaseEvasionAttack

Modular evasion attack for fixed-epsilon attacks.

__init__(y_target: int | None, num_steps: int, step_size: float, loss_function: str | torch.nn.Module, optimizer_cls: str | partial[torch.optim.Optimizer], manipulation_function: Manipulation, initializer: Initializer, gradient_processing: GradientProcessing, trackers: list[Tracker] | Tracker | None = None) → None[source]

Create modular evasion attack.

Parameters:
  • y_target (int | None) – Target label for the attack, None for untargeted.

  • num_steps (int) – Number of iterations for the attack.

  • step_size (float) – Attack step size.

  • loss_function (str | torch.nn.Module) – Loss function to minimize.

  • optimizer_cls (str | partial[Optimizer]) – Algorithm for solving the attack optimization problem.

  • manipulation_function (Manipulation) – Manipulation function to perturb the inputs.

  • initializer (Initializer) – Initialization for the perturbation delta.

  • gradient_processing (GradientProcessing) – Gradient transformation function.

  • trackers (list[Tracker] | Tracker | None, optional) – Trackers for logging, by default None.

Raises:

ValueError – Raised if the given loss function is not in the list of allowed loss functions.
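
The sketch below illustrates how the components are composed into an attack. The import paths and constructor arguments of the manipulation, constraint, initializer and gradient-processing classes are assumptions (those classes live outside this package); check the corresponding secmlt packages for the exact API.

    from functools import partial

    import torch

    from secmlt.adv.evasion.modular_attack import ModularEvasionAttackFixedEps
    from secmlt.adv.evasion.perturbation_models import LpPerturbationModels

    # assumed locations of the component classes
    from secmlt.manipulations.manipulation import AdditiveManipulation
    from secmlt.optimization.constraints import ClipConstraint, LpConstraint
    from secmlt.optimization.gradient_processing import (
        LinearProjectionGradientProcessing,
    )
    from secmlt.optimization.initializer import Initializer

    attack = ModularEvasionAttackFixedEps(
        y_target=None,  # untargeted
        num_steps=50,
        step_size=0.05,
        # per-sample cross-entropy; a torch Module is accepted by the annotation
        loss_function=torch.nn.CrossEntropyLoss(reduction="none"),
        optimizer_cls=partial(torch.optim.Adam),
        # additive perturbation, clipped to [0, 1] and kept inside an L2 ball
        manipulation_function=AdditiveManipulation(
            domain_constraints=[ClipConstraint(lb=0.0, ub=1.0)],
            perturbation_constraints=[LpConstraint(p=2, radius=0.5)],
        ),
        initializer=Initializer(),  # start from a zero perturbation
        gradient_processing=LinearProjectionGradientProcessing(
            LpPerturbationModels.L2
        ),
    )

The PGDNative attack documented below is built on this class and wires up an equivalent set of components for PGD.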

forward_loss(model: BaseModel, x: torch.Tensor, target: torch.Tensor) → tuple[torch.Tensor, torch.Tensor][source]

Compute the forward pass for the loss function.

Parameters:
  • model (BaseModel) – Model used by the attack run.

  • x (torch.Tensor) – Input sample.

  • target (torch.Tensor) – Target for computing the loss.

Returns:

Output scores and loss.

Return type:

tuple[torch.Tensor, torch.Tensor]

classmethod get_perturbation_models() → set[str][source]

Get the perturbation models implemented for this attack.

Returns:

Set of perturbation models available for this attack.

Return type:

set[str]

property manipulation_function: Manipulation

Get the manipulation function for the attack.

Returns:

The manipulation function used in the attack.

Return type:

Manipulation

secmlt.adv.evasion.perturbation_models module

Implementation of perturbation models for adversarial example perturbations.

class secmlt.adv.evasion.perturbation_models.LpPerturbationModels[source]

Bases: object

Lp perturbation models.

L0 = 'l0'
L1 = 'l1'
L2 = 'l2'
LINF = 'linf'

classmethod get_p(perturbation_model: str) → float[source]

Get the float representation of p from the given string.

Parameters:

perturbation_model (str) – One of the strings defined in LpPerturbationModels.pert_models.

Returns:

The float representation of p, to use, e.g., in torch.norm(p=…).

Return type:

float

Raises:

ValueError – Raised if the given norm is not in LpPerturbationModels.pert_models.

classmethod is_perturbation_model_available(perturbation_model: str) → bool[source]

Check availability of the perturbation model requested.

Parameters:

perturbation_model (str) – A perturbation model as a string.

Returns:

True if the perturbation model is found in LpPerturbationModels.pert_models.

Return type:

bool

pert_models: ClassVar[dict[str, float]] = {'l0': 0, 'l1': 1, 'l2': 2, 'linf': inf}
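
A brief usage sketch of the helpers above:

    from math import inf

    from secmlt.adv.evasion.perturbation_models import LpPerturbationModels

    # map the string identifiers to the numeric p used, e.g., by torch norms
    assert LpPerturbationModels.get_p(LpPerturbationModels.L2) == 2
    assert LpPerturbationModels.get_p(LpPerturbationModels.LINF) == inf

    # availability checks against pert_models
    assert LpPerturbationModels.is_perturbation_model_available("l1")
    assert not LpPerturbationModels.is_perturbation_model_available("l42")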

secmlt.adv.evasion.pgd module

Implementations of the Projected Gradient Descent evasion attack.

class secmlt.adv.evasion.pgd.PGD(perturbation_model: str, epsilon: float, num_steps: int, step_size: float, random_start: bool = False, y_target: int | None = None, lb: float = 0.0, ub: float = 1.0, backend: str = 'foolbox', trackers: list[Tracker] | None = None, **kwargs)[source]

Bases: BaseEvasionAttackCreator

Creator for the Projected Gradient Descent (PGD) attack.

static __new__(cls, perturbation_model: str, epsilon: float, num_steps: int, step_size: float, random_start: bool = False, y_target: int | None = None, lb: float = 0.0, ub: float = 1.0, backend: str = 'foolbox', trackers: list[Tracker] | None = None, **kwargs) → BaseEvasionAttack[source]

Create the PGD attack.

Parameters:
  • perturbation_model (str) – Perturbation model for the attack. Available: 'l1', 'l2', 'linf' (see LpPerturbationModels).

  • epsilon (float) – Radius of the constraint for the Lp ball.

  • num_steps (int) – Number of iterations for the attack.

  • step_size (float) – Attack step size.

  • random_start (bool, optional) – Whether to use a random initialization within the Lp ball, by default False.

  • y_target (int | None, optional) – Target label for a targeted attack, None for untargeted attack, by default None.

  • lb (float, optional) – Lower bound of the input space, by default 0.0.

  • ub (float, optional) – Upper bound of the input space, by default 1.0.

  • backend (str, optional) – Backend used to run the attack, by default Backends.FOOLBOX.

  • trackers (list[Tracker] | None, optional) – Trackers to check various attack metrics (see secmlt.trackers), available only for the native implementation, by default None.

Returns:

PGD attack instance.

Return type:

BaseEvasionAttack

static get_backends() → list[str][source]

Get the available backends for the PGD attack.
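
Example usage of the creator, as a minimal sketch: the BasePytorchClassifier import path is an assumption (any BaseModel wrapper works), the data is random, and the default Foolbox backend requires the foolbox extra to be installed (see get_foolbox_implementation above).

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    from secmlt.adv.evasion.perturbation_models import LpPerturbationModels
    from secmlt.adv.evasion.pgd import PGD
    from secmlt.models.pytorch.base_pytorch_nn import BasePytorchClassifier  # path assumed

    # toy classifier and random data, only to make the sketch self-contained
    net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    model = BasePytorchClassifier(net)
    data = TensorDataset(torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))
    test_loader = DataLoader(data, batch_size=4)

    # untargeted L-inf PGD with the default Foolbox backend
    attack = PGD(
        perturbation_model=LpPerturbationModels.LINF,
        epsilon=8 / 255,
        num_steps=10,
        step_size=0.01,
        random_start=False,
        backend="foolbox",
    )
    adv_loader = attack(model, test_loader)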

class secmlt.adv.evasion.pgd.PGDNative(perturbation_model: str, epsilon: float, num_steps: int, step_size: float, random_start: bool, y_target: int | None = None, lb: float = 0.0, ub: float = 1.0, trackers: list[Tracker] | None = None, **kwargs)[source]

Bases: ModularEvasionAttackFixedEps

Native implementation of the Projected Gradient Descent attack.

__init__(perturbation_model: str, epsilon: float, num_steps: int, step_size: float, random_start: bool, y_target: int | None = None, lb: float = 0.0, ub: float = 1.0, trackers: list[Tracker] | None = None, **kwargs) → None[source]

Create Native PGD attack.

Parameters:
  • perturbation_model (str) – Perturbation model for the attack. Available: 'l1', 'l2', 'linf' (see LpPerturbationModels).

  • epsilon (float) – Radius of the constraint for the Lp ball.

  • num_steps (int) – Number of iterations for the attack.

  • step_size (float) – Attack step size.

  • random_start (bool) – Whether to use a random initialization within the Lp ball.

  • y_target (int | None, optional) – Target label for a targeted attack, None for untargeted attack, by default None.

  • lb (float, optional) – Lower bound of the input space, by default 0.0.

  • ub (float, optional) – Upper bound of the input space, by default 1.0.

  • trackers (list[Tracker] | None, optional) – Trackers to check various attack metrics (see secmlt.trackers), available only for the native implementation, by default None.
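
A native PGD attack with a tracker attached might look like the following sketch; the LossTracker class and its import path are assumptions, and model and test_loader are as in the PGD example above.

    from secmlt.adv.evasion.perturbation_models import LpPerturbationModels
    from secmlt.adv.evasion.pgd import PGDNative
    from secmlt.trackers.trackers import LossTracker  # path assumed

    attack = PGDNative(
        perturbation_model=LpPerturbationModels.L2,
        epsilon=0.5,
        num_steps=20,
        step_size=0.1,
        random_start=True,
        trackers=[LossTracker()],
    )
    adv_loader = attack(model, test_loader)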

Module contents

Evasion attack functionalities.