langtest.utils.custom_types.sample.SensitivitySample#

class SensitivitySample(*, original: str, options: str, test_case: str = None, state: str = None, dataset_name: str = None, task: str = 'sensitivity', category: str = None, test_type: str = None, expected_result: Result = None, actual_result: Result = None, loss_diff: float = None, ran_pass: bool = None, config: str = None, hub: str = None, op1: Dict = None, op2: Dict = None)#

Bases: BaseModel

A class representing a sample for the sensitivity task.

original#

The original text input.

Type:

str

options#

The answer options accompanying the input (ignored for the ‘add_toxic_words’ test type).

Type:

str

test_case#

The transformed text input for testing.

Type:

str

state#

The state of the sample.

Type:

str

dataset_name#

The name of the dataset the sample belongs to.

Type:

str

task#

The type of task; defaults to “sensitivity”.

Type:

str

category#

The category or module name associated with the sample.

Type:

str

test_type#

The type of test being performed.

Type:

str

expected_result#

The expected result of the sensitivity test.

Type:

Result

actual_result#

The actual result obtained from the sensitivity test.

Type:

Result

loss_diff#

The difference in loss between expected and actual results.

Type:

float

ran_pass#

Flag indicating if the sensitivity test passed.

Type:

bool

config#

Configuration information for the test.

Type:

str

hub#

The model hub associated with the sample.

Type:

str

op1#

Output dictionary for the original text.

Type:

Dict

op2#

Output dictionary for the transformed text.

Type:

Dict
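
A minimal construction sketch; the field values below are illustrative assumptions, not drawn from a real dataset. The ‘add_toxic_words’ test type is the one named elsewhere on this page, for which the options field is ignored when building model input:

    from langtest.utils.custom_types.sample import SensitivitySample

    sample = SensitivitySample(
        original="The movie was surprisingly good.",
        options="",  # ignored for the 'add_toxic_words' test type
        test_type="add_toxic_words",
    )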

to_dict(self) Dict[str, Any]#

Convert the SensitivitySample instance to a dictionary.

is_pass(self) bool#

Check if the sensitivity test passes based on the loss difference threshold.

run(self, model, **kwargs) bool#

Run the sensitivity test using the provided model.

transform(self, func: Callable, params: Dict, **kwargs)#

Transform the original text using a specified function.

__init__(**data)#

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Methods

__init__(**data)

Create a new model by parsing and validating input data from keyword arguments.

build_input(text, options)

Builds input data for the model.

build_prompt(input_data, **kwargs)

Builds a prompt for the model.

construct([_fields_set])

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.

copy(*[, include, exclude, update, deep])

Duplicate a model, optionally choose which fields to include, exclude and change.

dict(*[, include, exclude, by_alias, ...])

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

from_orm(obj)

is_pass()

Check if the sensitivity test passes based on the loss difference threshold.

json(*[, include, exclude, by_alias, ...])

Generate a JSON representation of the model, include and exclude arguments as per dict().

parse_file(path, *[, content_type, ...])

parse_obj(obj)

parse_raw(b, *[, content_type, encoding, ...])

run(model, **kwargs)

Run the sensitivity test using the provided model.

schema([by_alias, ref_template])

schema_json(*[, by_alias, ref_template])

to_dict()

Convert the SensitivitySample instance to a dictionary.

transform(func, params, **kwargs)

Transform the original text using a specified function.

update_forward_refs(**localns)

Try to update ForwardRefs on fields based on this Model, globalns and localns.

validate(value)

Attributes

original

options

test_case

state

dataset_name

task

category

test_type

expected_result

actual_result

loss_diff

ran_pass

config

hub

op1

op2

build_input(text: str, options: str) dict#

Builds input data for the model.

Parameters:
  • text (str) – Main text or context.

  • options (str) – Options for the input. Ignored for ‘add_toxic_words’ test type.

Returns:

Input data. Structure depends on the test type.

For the ‘add_toxic_words’ test type, the dict includes only the main text; for other test types, it includes a question and optional answer options.

Return type:

dict
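
Continuing the construction sketch above; the text and option strings are illustrative assumptions:

    input_data = sample.build_input(
        text="Which planet is known as the Red Planet?",
        options="A. Venus\nB. Mars\nC. Jupiter",
    )
    # Per the docstring: for the 'add_toxic_words' test type the returned
    # dict carries only the main text; for other test types it carries a
    # question plus the optional answer options.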

build_prompt(input_data: dict, **kwargs) dict#

Builds a prompt for the model.

Parameters:
  • input_data (dict) – Input data.

  • **kwargs – Additional arguments. user_prompt (str, optional): a user-defined prompt template.

Returns:

Prompt information.

Return type:

dict
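
A sketch of both call forms. The user_prompt keyword is documented above; the template string itself is a hypothetical example, not one shipped with the library:

    prompt = sample.build_prompt(input_data)

    # Optionally supply a user-defined template.
    prompt = sample.build_prompt(input_data, user_prompt="Answer concisely.")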

classmethod construct(_fields_set: SetStr | None = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values.
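
For example, to rebuild a sample from data that has already been validated (standard pydantic v1 behavior, not specific to this class):

    raw = sample.dict()
    restored = SensitivitySample.construct(**raw)  # no validation is performed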

copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: DictStrAny | None = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters:
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns:

new model instance
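
For example (standard pydantic v1 copy semantics):

    updated = sample.copy(update={"state": "done"}, deep=True)
    # Note: the update values are not validated (see above).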

dict(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, by_alias: bool = False, skip_defaults: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

is_pass()#

Check if the sensitivity test passes based on the loss difference threshold.

Returns:

True if the test passes, False otherwise.

Return type:

bool
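
For example, after a test has run:

    passed = sample.is_pass()
    # Compares the stored loss difference against the configured threshold.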

json(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, by_alias: bool = False, skip_defaults: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
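
For example, to serialize a sample while dropping None-valued fields (standard pydantic v1 usage):

    payload = sample.json(exclude_none=True)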

run(model, **kwargs)#

Run the sensitivity test using the provided model.

Parameters:
  • model – The model used for sensitivity testing.

  • **kwargs – Additional keyword arguments for the model.

Returns:

True if the test was successful, False otherwise.

Return type:

bool
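
A usage sketch; model stands in for whatever model wrapper the langtest harness supplies, which is outside the scope of this page:

    ran = sample.run(model)  # True if the test ran successfully
    if ran:
        passed = sample.is_pass()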

to_dict() Dict[str, Any]#

Convert the SensitivitySample instance to a dictionary.

Returns:

A dictionary representation of the sample.

Return type:

Dict[str, Any]
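
For example:

    row = sample.to_dict()
    # The docstring does not enumerate the keys; inspect the returned
    # dict rather than assuming a fixed schema.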

transform(func: Callable, params: Dict, **kwargs)#

Transform the original text using a specified function.

Parameters:
  • func (Callable) – The transformation function.

  • params (Dict) – Parameters for the transformation function.

  • **kwargs – Additional keyword arguments for the transformation.
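
A sketch under an explicit assumption: the callable contract below (a string-to-string function receiving the params as keyword arguments) is a guess for illustration, not the documented interface.

    def shout(text: str, intensity: int = 1) -> str:
        # Hypothetical perturbation used only to show the call shape.
        return text.upper() + "!" * intensity

    sample.transform(func=shout, params={"intensity": 2})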

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.