Legal Notebook


Overview

The Legal Benchmark Test evaluates a model's legal reasoning and comprehension, specifically its ability to assess how strongly a case summary supports a legal claim. In this tutorial, we use the LegalSupport dataset, in which each sample pairs a text passage making a legal claim with two corresponding case summaries, and examine the model's performance in judging which summary provides stronger support. This concise exploration reveals how well the model navigates legal nuance and discerns support strength, offering useful insight into its legal reasoning capabilities.

Open in Colab

| Category    | Hub    | Task               | Dataset Used  | Open In Colab |
|-------------|--------|--------------------|---------------|---------------|
| Legal-Tests | OpenAI | Question-Answering | Legal-Support | Open In Colab |

Config Used

tests:
  defaults:
    min_pass_rate: 1.0

  legal:
    legal-support:
      min_pass_rate: 0.70
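The config above sets a default minimum pass rate of 1.0 and overrides it to 0.70 for the legal-support test. A minimal sketch of how such a threshold check could work is below; the `passes_threshold` helper and the sample counts are hypothetical, only the config keys and values come from the snippet above.

```python
# Sketch of applying a min_pass_rate threshold from the config above.
# The passes_threshold helper and the 72/100 counts are illustrative,
# not part of the actual test harness.

def passes_threshold(passed: int, total: int, min_pass_rate: float) -> bool:
    """Return True if the observed pass rate meets the configured minimum."""
    return (passed / total) >= min_pass_rate

# Mirror of the config: legal-support overrides the 1.0 default with 0.70.
config = {
    "defaults": {"min_pass_rate": 1.0},
    "legal": {"legal-support": {"min_pass_rate": 0.70}},
}

rate = config["legal"]["legal-support"]["min_pass_rate"]
print(passes_threshold(passed=72, total=100, min_pass_rate=rate))  # 0.72 >= 0.70 -> True
```

With the default of 1.0, any single failed sample would fail the suite; lowering the legal-support threshold to 0.70 tolerates up to 30% failures on that test.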

Supported Tests

  • legal-support: Tests a model’s ability to reason about how strongly a particular case summary supports a given legal claim.
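To make the task concrete, the sketch below models a LegalSupport-style sample and a simple accuracy score, assuming each item pairs a legal claim with two candidate case summaries and a gold label naming the stronger supporter. The `LegalSupportSample` structure, the `score` function, and the example text are all hypothetical illustrations, not the dataset's actual schema.

```python
# Illustrative sketch of a legal-support evaluation item: given a claim
# and two case summaries, the model must pick the summary that supports
# the claim more strongly. A sample counts as passed when the model's
# choice matches the gold label.
from dataclasses import dataclass

@dataclass
class LegalSupportSample:
    claim: str       # text passage making a legal claim
    summary_a: str   # first candidate case summary
    summary_b: str   # second candidate case summary
    gold: str        # "a" or "b": which summary supports the claim more strongly

def score(samples, predict) -> float:
    """Fraction of samples where the model's choice matches the gold label."""
    passed = sum(1 for s in samples if predict(s) == s.gold)
    return passed / len(samples)

sample = LegalSupportSample(
    claim="Punitive damages require a showing of willful misconduct.",
    summary_a="The court held that punitive damages need proof of willful or wanton conduct.",
    summary_b="The court discussed venue requirements for contract disputes.",
    gold="a",
)
print(score([sample], predict=lambda s: "a"))  # 1.0
```

The resulting pass rate is what gets compared against the configured min_pass_rate of 0.70.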