langtest.augmentation.debias.DebiasTextProcessing

class DebiasTextProcessing(model: str, hub: str, system_prompt: str = '\nThe provided text may contains biased information, including discrimination and stereotyping, categorized as follows:\n\n### Bias Detection Categories and Subcategories\nCategory: Sub-category1, Sub-category2, Sub-category3, ...\n\n- **Demographic Bias**: Gender-specific, racial, ethnic, religious, age-related.\n- **Social Bias**: Socio-economic, educational, occupational, geographical.\n- **Historical Bias**: Cultural, traditional, colonial.\n- **Confirmation Bias**: Selective observation, cherry-picking evidence.\n- **Evaluation Bias**: Subjective judgment, misrepresentation in assessment.\n- **Aggregation Bias**: Overgeneralization, stereotyping, data grouping errors.\n- **Algorithmic Bias**: Model design, optimization, unfair weighting.\n- **Data Bias**: Imbalanced datasets, exclusion of minorities, labeling errors.\n- **Automation Bias**: Overreliance on automated outputs, neglecting human oversight.\n\n### Unbiased Categories and Subcategories\nCategory: subcategory1, subcategory2, subcategory3, ...\n- **Unbiased**: fair, equitable, no bias detected.\n\n### Task Instructions\n1. **Identify Bias**: Determine the category and subcategories of bias in the text if no bias found in text follow the unbiased categories and subcategories.\n2. **Rate Bias**: Provide a rating based on the severity level (Low to Critical).\n3. **De-bias the Text**: Provide a step-by-step rephrased version of the text, justifying each adjustment.\n\n', model_kwargs: Dict | None = None)#

Bases: object

__init__(model: str, hub: str, system_prompt: str = ..., model_kwargs: Dict | None = None)

The system_prompt default is the full bias-detection prompt shown in the class signature above.
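A minimal construction sketch (not part of the original reference): the constructor arguments follow the signature above, but the "openai" hub, "gpt-4o-mini" model name, and the model_kwargs contents are illustrative assumptions, not library defaults.

    # Hypothetical usage -- hub and model names are assumptions.
    from langtest.augmentation.debias import DebiasTextProcessing

    processor = DebiasTextProcessing(
        model="gpt-4o-mini",              # assumed model name
        hub="openai",                     # assumed hub name
        model_kwargs={"temperature": 0},  # optional kwargs forwarded to the model
    )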

Methods

__init__(model, hub[, system_prompt, ...])

apply_bias_correction([bias_tolerance_level])
    Apply bias correction to the dataset.

apply_debiasing([level])

debias_text(text, category, sub_category, ...)

detect_bias(text)

enhance_text(text[, bias_tolerance_level])
    Enhance the text by debiasing it.

get_ollama(text, system_prompt, output_schema)

get_openai(text, system_prompt, ...)

identify_bias()

initialize(input_dataset, text_column[, ...])

interaction_llm(text, output_schema, ...)

load_data(source, source_type)

apply_bias_correction(bias_tolerance_level: int = 2)

Apply bias correction to the dataset.
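A hedged sketch of a typical call sequence, continuing the constructor example above. It assumes that initialize() accepts a pandas DataFrame and the name of its text column (based on the initialize(input_dataset, text_column, ...) entry in the methods list), and that this method rewrites entries whose detected bias severity exceeds the tolerance level; the column name and the data are placeholders.

    import pandas as pd

    # Assumed input: a DataFrame with one text column to debias.
    df = pd.DataFrame({"text": [
        "Nurses are always women.",
        "The committee reviewed the proposal on Tuesday.",
    ]})

    # Register the dataset, then correct entries above the tolerance level (default 2).
    processor.initialize(input_dataset=df, text_column="text")
    processor.apply_bias_correction(bias_tolerance_level=2)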

enhance_text(text: str, bias_tolerance_level: Literal[1, 2, 3, 4, 5] = 2) → str

Enhance the text by debiasing it.
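A single-string sketch, reusing the processor constructed above; the example sentence and tolerance value are placeholders. Per the signature, the call returns the rewritten text as a string.

    debiased = processor.enhance_text(
        "Older employees always struggle with new technology.",
        bias_tolerance_level=3,
    )
    print(debiased)  # the debiased rewrite of the input text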