langtest.utils.report_utils.mlflow_report
- mlflow_report(experiment_name: str, task: str | TaskManager, df_report: DataFrame, multi_model_comparison: bool = False)
Logs metrics and details from a given report to an MLflow experiment.
This function records metrics from the report (e.g., pass rate and pass status) to the named MLflow experiment. If the experiment does not already exist, it is created; if it does, the metrics are logged under a new run within it.
Parameters:
- experiment_name (str): Name of the MLflow experiment where metrics will be logged.
- task (str | TaskManager): A descriptor or identifier for the current testing task, used to name the run.
- df_report (pd.DataFrame): DataFrame containing the report details. It should have columns such as “pass_rate”, “minimum_pass_rate”, and “pass”, and optionally “pass_count” and “fail_count”.
- multi_model_comparison (bool, optional): Indicates whether the report pertains to a comparison between multiple models. If True, certain metrics such as “pass_count” and “fail_count” are not logged. Default is False.
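A minimal usage sketch follows. The experiment name, task name, and the report rows are illustrative assumptions; in practice the DataFrame would typically come from a langtest report rather than be built by hand.

```python
import pandas as pd

from langtest.utils.report_utils import mlflow_report

# Illustrative report rows. "test_type" is an assumed identifier column;
# the documented metric columns ("pass_rate", "minimum_pass_rate", "pass",
# "pass_count", "fail_count") follow.
df_report = pd.DataFrame(
    {
        "test_type": ["uppercase", "add_typo"],
        "pass_rate": [0.85, 0.90],
        "minimum_pass_rate": [0.75, 0.80],
        "pass": [True, True],
        "pass_count": [85, 90],
        "fail_count": [15, 10],
    }
)

# Creates (or reuses) the "langtest-robustness" experiment and logs the
# metrics from df_report under a new run named after the task.
mlflow_report(
    experiment_name="langtest-robustness",
    task="ner",
    df_report=df_report,
    multi_model_comparison=False,
)
```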