langtest.modelhandler.lmstudio_modelhandler.PretrainedModelForSensitivity

class PretrainedModelForSensitivity(model: Any, *args, **kwargs)

Bases: PretrainedModel, ModelAPI

__init__(model: Any, *args, **kwargs)

Initialize the PretrainedModelForSensitivity.

Parameters:
  • model (Any) – The pretrained model to be used.

  • *args – Additional positional arguments.

  • **kwargs – Additional keyword arguments.
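
A minimal construction sketch. The placeholder object below only illustrates the signature; it is not a real pretrained model, and in practice instances are usually obtained through the load_model classmethod documented under Methods.

    from langtest.modelhandler.lmstudio_modelhandler import PretrainedModelForSensitivity

    # Placeholder standing in for whatever pretrained model object the handler
    # wraps; real instances are typically created via `load_model` below.
    placeholder_model = object()
    handler = PretrainedModelForSensitivity(model=placeholder_model)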

Methods

__init__(model, *args, **kwargs)

Initialize the PretrainedModelForSensitivity.

load_model(path, *args, **kwargs)

Load the pretrained model.

predict(text, prompt, server_prompt, *args, ...)

Perform prediction using the pretrained model.

predict_raw(text, prompt, server_prompt, ...)

Predicts the output for the given input text without caching.

Attributes

model_registry

classmethod load_model(path: str, *args, **kwargs) → Any

Load the pretrained model.

Parameters:
  • path (str) – The path to the pretrained model.

  • *args – Additional positional arguments.

  • **kwargs – Additional keyword arguments.

Returns:

The loaded pretrained model.

Return type:

Any
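
A hedged loading sketch. The model identifier shown is hypothetical; substitute the path or model name exposed by your local LM Studio server.

    from langtest.modelhandler.lmstudio_modelhandler import PretrainedModelForSensitivity

    # "llama-3-8b-instruct" is a hypothetical identifier; use whatever model your
    # local LM Studio instance actually serves.
    model = PretrainedModelForSensitivity.load_model(path="llama-3-8b-instruct")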

predict(text: str | dict, prompt: dict, server_prompt: str, *args, **kwargs)

Perform prediction using the pretrained model.

Parameters:
  • text (Union[str, dict]) – The input text or dictionary.

  • prompt (dict) – The prompt for the prediction.

  • server_prompt (str) – The server prompt for the chat.

  • *args – Additional positional arguments.

  • **kwargs – Additional keyword arguments.

Returns:

A dictionary containing the prediction result.
  • 'result': The prediction result.

Return type:

dict
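
A usage sketch under stated assumptions: the model identifier is hypothetical, and the shapes of the text and prompt arguments are illustrative only, since this reference does not document the expected keys of the prompt dictionary.

    from langtest.modelhandler.lmstudio_modelhandler import PretrainedModelForSensitivity

    model = PretrainedModelForSensitivity.load_model(path="llama-3-8b-instruct")  # hypothetical name

    # Illustrative argument shapes; the prompt dict keys are an assumption, not a
    # documented contract.
    output = model.predict(
        text={"statement": "The sun rises in the east."},
        prompt={"template": "Is this true? {statement}", "input_variables": ["statement"]},
        server_prompt="You are a helpful assistant.",
    )
    print(output["result"])  # the documented return is a dict with a 'result' key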

predict_raw(text: str | dict, prompt: dict, server_prompt: str, *args, **kwargs)

Predicts the output for the given input text without caching.

Parameters:
  • text (Union[str, dict]) – The input text or dictionary.

  • prompt (dict) – The prompt for the prediction.

  • server_prompt (str) – The server prompt for the chat.

  • *args – Additional positional arguments.

  • **kwargs – Additional keyword arguments.

Returns:

The predicted output.

Return type:

str
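
The same illustrative call through predict_raw, which bypasses caching and returns the raw string output instead of a dictionary. Argument shapes are assumptions carried over from the predict sketch above.

    from langtest.modelhandler.lmstudio_modelhandler import PretrainedModelForSensitivity

    model = PretrainedModelForSensitivity.load_model(path="llama-3-8b-instruct")  # hypothetical name

    # Same assumed argument shapes as in the `predict` sketch; the return value is
    # documented as a plain string.
    raw_output = model.predict_raw(
        text={"statement": "The sun rises in the east."},
        prompt={"template": "Is this true? {statement}", "input_variables": ["statement"]},
        server_prompt="You are a helpful assistant.",
    )
    print(raw_output)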