Confusion Matrix Scripting
In addition to exploring model failures in the UI, you can also script directly against the entries that make up the confusion matrix.
Full Python client docs are available here; a brief example follows.
import aquariumlearning as al

PROJECT = 'YOUR_PROJECT'
INF_ID = 'experiment_1'
DATASET_ID = 'dataset_name'

al_client = al.Client()
al_client.set_credentials(api_key="YOUR_API_KEY")

metrics_manager = al_client.get_metrics_manager(PROJECT)

# Specify your queries here. The union of all query results will be returned.
# See the Python client docs for the exhaustive list.
queries = [
    # All confusions
    metrics_manager.make_confusions_query(),

    # A specific cell (replace with your ground truth / inference class names)
    metrics_manager.make_cell_query('gt_class', 'inf_class'),

    # A full row / column
    metrics_manager.make_confused_as_query('inf_class'),
]

confusions_opts = {
    'confidence_threshold': 0.5,
    'iou_threshold': 0.5,
    'queries': queries,
    'ordering': metrics_manager.ORDER_CONF_DESC,
}

confusions = metrics_manager.fetch_confusions(DATASET_ID, INF_ID, confusions_opts)

print('num_results: ', len(confusions['rows']))
print(confusions['rows'][0])
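If you want to aggregate the results rather than inspect them one row at a time, the returned rows can be post-processed with ordinary Python. Below is a minimal sketch that tallies the most frequent confusion pairs; the gt_label and inf_label field names are illustrative assumptions, so print a row first (as above) to confirm the actual keys returned by your client version.

from collections import Counter

# Tally (ground truth, inference) pairs across the returned rows.
# NOTE: 'gt_label' and 'inf_label' are assumed field names for illustration;
# inspect a printed row above to confirm the actual schema.
pair_counts = Counter(
    (row['gt_label'], row['inf_label']) for row in confusions['rows']
)

# Show the ten most common confusions.
for (gt, inf), count in pair_counts.most_common(10):
    print(f'{gt} confused as {inf}: {count}')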