langtest.transform.sensitivity.AddToxicWords
- class AddToxicWords
Bases: BaseSensitivity
A class for handling sensitivity-related phrases in the input text, specifically those related to toxicity.
- alias_name
The alias name for this sensitivity transformation.
- Type:
str
- transform(sample_list: List[Sample], starting_context: Optional[List[str]] = None, ending_context: Optional[List[str]] = None, strategy: Optional[str] = None) -> List[Sample]
Transform the input list of samples to add toxicity-related text.
- Raises:
ValueError – If an invalid context strategy is provided.
- __init__()
Methods

__init__()
async_run(sample_list, model, **kwargs) – Creates a task to run the sensitivity measure.
run(sample_list, model, **kwargs) – Abstract method that implements the sensitivity measure.
transform(sample_list[, starting_context, ...]) – Transform the input list of samples to add toxicity-related text.

Attributes

supported_tasks
test_types
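The snippet below is a minimal sketch, assuming the class is importable from langtest.transform.sensitivity as the page title indicates. It instantiates the test and prints the class-level attributes listed above; their concrete values depend on the installed langtest version and are not asserted here.

```python
# Minimal sketch: instantiate the test class and inspect the class-level
# attributes documented above. Values are version-dependent and not asserted.
from langtest.transform.sensitivity import AddToxicWords

test = AddToxicWords()  # no-argument constructor, per __init__() above

print(AddToxicWords.alias_name)       # alias used to select this test in a config
print(AddToxicWords.supported_tasks)  # tasks this sensitivity test applies to
print(AddToxicWords.test_types)       # registered test type names
```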
- class TestConfig
Bases: dict
- clear() -> None. Remove all items from D.
- copy() -> a shallow copy of D
- fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
- items() -> a set-like object providing a view on D's items
- keys() -> a set-like object providing a view on D's keys
- pop(k[, d]) -> v, remove specified key and return the corresponding value.
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) -> None. Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
- values() -> an object providing a view on D's values
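Because TestConfig subclasses dict, it supports all of the mapping operations listed above. A minimal sketch, assuming TestConfig is exposed as a nested attribute of AddToxicWords (as the page layout suggests) and using an illustrative key name rather than one defined by the library:

```python
from langtest.transform.sensitivity import AddToxicWords

config = AddToxicWords.TestConfig()        # behaves like an empty dict
config.setdefault("min_pass_rate", 0.65)   # illustrative key, not a documented one
config.update({"min_pass_rate": 0.70})     # standard dict.update semantics
print(config.get("min_pass_rate", 0.0))    # -> 0.7
print(list(config.items()))                # -> [('min_pass_rate', 0.7)]
```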
- async classmethod async_run(sample_list: List[Sample], model: ModelAPI, **kwargs)
Creates a task to run the sensitivity measure.
- Parameters:
sample_list (List[Sample]) – The input data to be transformed.
model (ModelAPI) – The model to be used for evaluation.
**kwargs – Additional arguments to be passed to the sensitivity measure.
- Returns:
The task that runs the sensitivity measure.
- Return type:
asyncio.Task
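A minimal sketch of driving async_run from an event loop. The sample list and model are placeholders that would normally come from a langtest dataset loader and a configured ModelAPI backend; the double await (first on the coroutine, then on the returned asyncio.Task) follows the return type documented above.

```python
import asyncio

from langtest.transform.sensitivity import AddToxicWords


async def evaluate(sample_list, model):
    # async_run returns an asyncio.Task wrapping the sensitivity measure...
    task = await AddToxicWords.async_run(sample_list, model)
    # ...and awaiting that task yields the evaluated samples.
    return await task


# Placeholder invocation; supply real samples and a ModelAPI instance in practice:
# results = asyncio.run(evaluate(samples, model))
```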
- abstract async static run(sample_list: List[Sample], model: ModelAPI, **kwargs) -> List[Sample]
Abstract method that implements the sensitivity measure.
- Parameters:
sample_list (List[Sample]) – The input data to be transformed.
model (ModelAPI) – The model to be used for evaluation.
**kwargs – Additional arguments to be passed to the sensitivity measure.
- Returns:
The transformed data based on the implemented sensitivity measure.
- Return type:
List[Sample]
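Since run is abstract, concrete sensitivity tests override it. The sketch below is illustrative only: the class name, the model.predict call, and the attributes set on each sample are assumptions for the sake of the example, not the actual langtest implementation.

```python
from typing import List


class MyToxicitySensitivity:
    """Illustrative, subclass-style implementation of the abstract run() hook."""

    @staticmethod
    async def run(sample_list: List, model, **kwargs) -> List:
        for sample in sample_list:
            # Hypothetical attribute names: query the model with the perturbed
            # text and record the response for downstream comparison.
            sample.actual_results = model.predict(sample.test_case)
        return sample_list
```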
- static transform(sample_list: List[Sample], starting_context: List[str] | None = None, ending_context: List[str] | None = None, strategy: str | None = None) -> List[Sample]
Transform the input list of samples to add toxicity-related text.
- Parameters:
sample_list (List[Sample]) – A list of samples to transform.
starting_context (Optional[List[str]]) – A list of starting context tokens.
ending_context (Optional[List[str]]) – A list of ending context tokens.
strategy (str) – The strategy for adding context. Can be ‘start’, ‘end’, or ‘combined’.
- Returns:
The transformed list of samples.
- Return type:
List[Sample]
- Raises:
ValueError – If an invalid context strategy is provided.
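A minimal sketch of calling transform directly. The context phrases are placeholders rather than the library's built-in word lists, and the empty sample list stands in for samples produced by a langtest Harness or dataset loader.

```python
from langtest.transform.sensitivity import AddToxicWords

samples = []  # placeholder; normally a List[Sample] built by a langtest dataset loader

perturbed = AddToxicWords.transform(
    sample_list=samples,
    starting_context=["you idiot,"],       # prepended for 'start' or 'combined'
    ending_context=["that was stupid."],   # appended for 'end' or 'combined'
    strategy="combined",                   # 'start', 'end', or 'combined'; anything else raises ValueError
)
```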