langtest.transform.sensitivity.AddToxicWords

class AddToxicWords

Bases: BaseSensitivity

A class for handling sensitivity-related phrases in the input text, specifically those related to toxicity.

alias_name

The alias name for this sensitivity transformation.

Type:

str

transform(sample_list: List[Sample], starting_context: Optional[List[str]] = None, ending_context: Optional[List[str]] = None, strategy: Optional[str] = None) -> List[Sample]

Transform the input list of samples to add toxicity-related text.

Raises:

ValueError – If an invalid context strategy is provided.

__init__()

Methods

__init__()

async_run(sample_list, model, **kwargs)

Creates a task to run the sensitivity measure.

run(sample_list, model, **kwargs)

Abstract method that implements the sensitivity measure.

transform(sample_list[, starting_context, ending_context, strategy])

Transform the input list of samples to add toxicity-related text.

Attributes

alias_name

supported_tasks

test_types

async classmethod async_run(sample_list: List[Sample], model: ModelAPI, **kwargs)

Creates a task to run the sensitivity measure.

Parameters:
  • sample_list (List[Sample]) – The input data to be transformed.

  • model (ModelAPI) – The model to be used for evaluation.

  • **kwargs – Additional arguments to be passed to the sensitivity measure.

Returns:

The task that runs the sensitivity measure.

Return type:

asyncio.Task
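
A minimal usage sketch for async_run, assuming the samples and model have already been prepared elsewhere (e.g. via a langtest Harness); the driver at the bottom is a placeholder, not part of the documented API:

    import asyncio

    from langtest.transform.sensitivity import AddToxicWords

    async def evaluate(sample_list, model):
        # async_run returns an asyncio.Task wrapping the sensitivity
        # measure, so several test types can be awaited concurrently.
        task = await AddToxicWords.async_run(sample_list, model)
        return await task  # resolves to the evaluated List[Sample]

    # `samples` and `model` are placeholders to be supplied by the caller:
    # results = asyncio.run(evaluate(samples, model))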

abstract async static run(sample_list: List[Sample], model: ModelAPI, **kwargs) -> List[Sample]

Abstract method that implements the sensitivity measure.

Parameters:
  • sample_list (List[Sample]) – The input data to be transformed.

  • model (ModelAPI) – The model to be used for evaluation.

  • **kwargs – Additional arguments to be passed to the sensitivity measure.

Returns:

The transformed data based on the implemented sensitivity measure.

Return type:

List[Sample]
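
Because run is abstract, each concrete sensitivity test supplies its own implementation. A hypothetical subclass sketch of that contract (the model call style and sample fields below are assumptions for illustration, not the documented ModelAPI or Sample interfaces):

    from typing import List

    from langtest.transform.sensitivity import BaseSensitivity

    class MySensitivity(BaseSensitivity):
        """Hypothetical subclass illustrating the contract of run()."""

        alias_name = "my_sensitivity"

        @staticmethod
        async def run(sample_list, model, **kwargs) -> List:
            for sample in sample_list:
                # Assumed call style: query the model on the original and
                # the perturbed text, then attach both results so they can
                # be compared downstream.
                sample.expected_results = model(sample.original)
                sample.actual_results = model(sample.test_case)
            return sample_list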

static transform(sample_list: List[Sample], starting_context: List[str] | None = None, ending_context: List[str] | None = None, strategy: str | None = None) -> List[Sample]

Transform the input list of samples to add toxicity-related text.

Parameters:
  • sample_list (List[Sample]) – A list of samples to transform.

  • starting_context (Optional[List[str]]) – A list of context phrases to prepend to the input text.

  • ending_context (Optional[List[str]]) – A list of context phrases to append to the input text.

  • strategy (Optional[str]) – The strategy for adding context. Can be ‘start’, ‘end’, or ‘combined’.

Returns:

The transformed list of samples.

Return type:

List[Sample]

Raises:

ValueError – If an invalid context strategy is provided.
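
A hedged usage sketch of transform; the toxic context phrases below are illustrative, and the samples are assumed to come from langtest's data loaders rather than being constructed here:

    from langtest.transform.sensitivity import AddToxicWords

    def add_context(samples):
        # `samples` is a List[Sample] prepared elsewhere (e.g. by
        # langtest's data loaders).
        return AddToxicWords.transform(
            samples,
            starting_context=["damn it,"],  # prepended for 'start'/'combined'
            ending_context=["you idiot."],  # appended for 'end'/'combined'
            strategy="combined",            # 'start', 'end', or 'combined'
        )

    # Any other strategy value raises ValueError, per the Raises field above.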