Listing Test Cases
The test cases available for the configured task can be listed as follows:
h.available_tests()
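For context, a minimal sketch assuming the same Harness setup used in the configuration examples below:

from langtest import Harness

# Create a Harness for the task (same setup as in the configuration examples below).
h = Harness(task='text-classification',
            model={'model': 'path/to/local_saved_model', 'hub': 'spacy'},
            data={"data_source": 'test.csv'})

# List the tests supported for this task.
print(h.available_tests())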
The generate() method automatically generates test cases based on the provided configuration.
Configuring Tests
The test configuration can be passed either as a YAML file or via the configure() method.
Using the YAML Configuration File
tests:
  defaults:
    min_pass_rate: 0.65
  robustness:
    lowercase:
      min_pass_rate: 0.60
    uppercase:
      min_pass_rate: 0.60
from langtest import Harness
# Create test Harness with config file
h = Harness(task='text-classification', model={'model': 'path/to/local_saved_model', 'hub':'spacy'}, data={"data_source":'test.csv'}, config='config.yml')
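To make the intent of this configuration concrete, here is a rough, illustrative sketch of what the lowercase and uppercase robustness perturbations do to an input sentence; it is not langtest's internal implementation. Roughly speaking, a robustness test case passes when the model's prediction on the perturbed text matches the expected result obtained from the original text, and min_pass_rate is the minimum fraction of test cases that must pass.

# Illustrative only: simple text perturbations in the spirit of the
# 'lowercase' and 'uppercase' robustness tests configured above.
def lowercase(text: str) -> str:
    return text.lower()

def uppercase(text: str) -> str:
    return text.upper()

original = "I live in Berlin"
for name, perturb in [("lowercase", lowercase), ("uppercase", uppercase)]:
    print(f"{name}: {original!r} -> {perturb(original)!r}")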
Using the .configure() Method
from langtest import Harness
# Create test Harness without config file
h = Harness(task='text-classification', model={'model': 'path/to/local_saved_model', 'hub':'spacy'}, data={"data_source":'test.csv'})
h.configure(
    {
        'tests': {
            'defaults': {
                'min_pass_rate': 0.65
            },
            'robustness': {
                'lowercase': {'min_pass_rate': 0.60},
                'uppercase': {'min_pass_rate': 0.60}
            }
        }
    }
)
Generating Test Cases
Generating the test cases based on the configuration is as simple as calling the following method:
h.generate()
Viewing Test Cases
After generating the test cases, you can retrieve them by calling the following method:
h.testcases()
This method returns the generated test cases as a pandas DataFrame, making them easy to edit, filter, import, or export. You can manually review the list of generated test cases and decide which ones to keep or edit.
A sample test cases DataFrame looks like this:
| category   | test_type | original         | test_case        | expected_result |
|------------|-----------|------------------|------------------|-----------------|
| robustness | lowercase | I live in Berlin | i live in berlin | berlin: LOC     |
| robustness | uppercase | I live in Berlin | I LIVE IN BERLIN | BERLIN: LOC     |
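Because testcases() returns a pandas DataFrame, standard pandas operations can be used to review, filter, or export it. A minimal sketch, with column names taken from the sample above:

testcases = h.testcases()

# Keep only the lowercase robustness cases for closer inspection.
lowercase_cases = testcases[testcases["test_type"] == "lowercase"]
print(lowercase_cases)

# Export the full set for manual review and editing outside Python.
testcases.to_csv("testcases.csv", index=False)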