langtest.metrics.llm_eval

Classes

EvalTemplate()
    Provides a method to build a prompt for evaluating student answers against a given rubric.
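The sketch below illustrates the general idea behind rubric-based prompt building. It is a minimal, self-contained example only; the function name `build_eval_prompt` and its parameters are assumptions for illustration and do not reflect the actual EvalTemplate API.

```python
# Illustrative sketch only: langtest's actual EvalTemplate API may differ.
# Shows the general idea of assembling a grading prompt from a rubric.

def build_eval_prompt(question: str, student_answer: str, rubric: list) -> str:
    """Build a grading prompt that asks an LLM to score an answer against a rubric."""
    rubric_lines = "\n".join(f"- {criterion}" for criterion in rubric)
    return (
        "You are grading a student answer against the rubric below.\n"
        f"Rubric:\n{rubric_lines}\n\n"
        f"Question: {question}\n"
        f"Student answer: {student_answer}\n\n"
        "Respond with CORRECT or INCORRECT and a one-line justification."
    )

prompt = build_eval_prompt(
    "What is the capital of France?",
    "Paris",
    ["Answer names the correct city", "Answer is concise"],
)
print(prompt)
```

The built prompt is then sent to a judge LLM, whose verdict is parsed into an evaluation result.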

LlmEval(llm[, template, input_variables, ...])
    LlmEval for evaluating question answering.

RatingEval(llm[, eval_prompt, ...])
    RatingEval for evaluating responses with customizable rating prompts.

SummaryEval(llm[, template, input_variables])
    SummaryEval for evaluating clinical summary generation from doctor-patient dialogues.
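These classes follow the LLM-as-judge pattern: wrap an LLM, format an evaluation prompt, and parse the model's reply into a verdict or rating. The sketch below shows that pattern with a stub in place of a real model; the class name `SimpleRatingEval` and its signature are illustrative assumptions, not the langtest RatingEval API (which wraps a LangChain-style llm object).

```python
# Illustrative sketch of the LLM-as-judge pattern behind LlmEval / RatingEval;
# the real langtest classes may differ. `llm` here is any callable prompt -> str.
from typing import Callable, Optional

class SimpleRatingEval:
    def __init__(self, llm: Callable[[str], str], eval_prompt: Optional[str] = None):
        self.llm = llm
        # Default prompt template; a custom one can be supplied instead.
        self.eval_prompt = eval_prompt or (
            "Rate the following answer from 1 to 5.\n"
            "Question: {question}\nAnswer: {answer}\nRating:"
        )

    def evaluate(self, question: str, answer: str) -> int:
        prompt = self.eval_prompt.format(question=question, answer=answer)
        reply = self.llm(prompt)
        # Parse the first integer found in the model's reply.
        for token in reply.split():
            if token.strip(".").isdigit():
                return int(token.strip("."))
        raise ValueError(f"No rating found in reply: {reply!r}")

# Stub LLM so the sketch runs without a real model.
stub_llm = lambda prompt: "I would rate this 4."
evaluator = SimpleRatingEval(stub_llm)
print(evaluator.evaluate("What is 2 + 2?", "4"))  # prints 4
```

SummaryEval applies the same idea with a summary-specific template: the dialogue and generated summary are substituted into the prompt, and the judge LLM's reply is parsed into a quality score.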