Releases: chakki-works/seqeval

v1.2.2

23 Oct 23:48
354c576

Update setup.py to relax version pinning (fixes #65).
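
For context, "relaxing" a pin usually means replacing an exact == requirement with a lower-bound >= range in install_requires. The snippet below is only an illustrative sketch of that idea; the dependency names and versions are assumptions, not the actual diff.

# setup.py (illustrative sketch; dependency names/versions are assumptions)
from setuptools import setup, find_packages

setup(
    name='seqeval',
    packages=find_packages(),
    # before: an exact pin such as 'numpy==1.19.2'
    # after:  a lower-bound range that allows newer releases
    install_requires=['numpy>=1.14.0', 'scikit-learn>=0.21.3'],
)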

v1.2.1

17 Oct 00:36
29a0e1e

Speed up evaluation in strict mode; it is now about 13 times faster.

Fixes #62

v1.2.0

15 Oct 22:26
c9db419

Enable computing macro-, weighted-, and per-class averaged F1, recall, and precision (#61).

F1 score

>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> f1_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> f1_score(y_true, y_pred, average='macro')
0.75
>>> f1_score(y_true, y_pred, average='weighted')
0.6666666666666666

Precision

>>> from seqeval.metrics import precision_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> precision_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> precision_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> precision_score(y_true, y_pred, average='macro')
0.75
>>> precision_score(y_true, y_pred, average='weighted')
0.6666666666666666

Recall

>>> from seqeval.metrics import recall_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> recall_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> recall_score(y_true, y_pred, average='macro')
0.75
>>> recall_score(y_true, y_pred, average='weighted')
0.6666666666666666

v1.1.1

13 Oct 23:40
8dd9f67

Add a length check to classification_report v1 (#59).

v1.1.0

12 Oct 04:32

Add BILOU as a scheme (#56).
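
With BILOU, entities end with an L- tag and single-token entities use a U- tag. A minimal sketch of strict evaluation against the new scheme (the toy labels are made up for illustration, and the printed report is omitted here):

>>> from seqeval.metrics import classification_report
>>> from seqeval.scheme import BILOU
>>> y_true = [['B-PER', 'L-PER', 'O', 'U-LOC']]
>>> y_pred = [['B-PER', 'L-PER', 'O', 'U-LOC']]
>>> print(classification_report(y_true, y_pred, mode='strict', scheme=BILOU))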

v1.0.0

11 Oct 05:00
91215f5

In some cases, the behavior of the existing classification_report is not sufficient. The new classification_report lets you specify the evaluation scheme explicitly (see the sketch after the list below). This resolves the following issues:

Fix #23
Fix #25
Fix #35
Fix #36
Fix #39
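
A minimal sketch of the new API, with the scheme passed explicitly together with strict mode (the toy labels are illustrative, and the printed report is omitted here):

>>> from seqeval.metrics import classification_report
>>> from seqeval.scheme import IOB2
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> print(classification_report(y_true, y_pred, mode='strict', scheme=IOB2))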

v0.0.19

07 Oct 01:48
a48a9d1

classification_report can output either a string or a dict, as requested in issue #41 (#51).
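
A minimal sketch of the dict output, assuming the scikit-learn-style output_dict flag described in the issue (the exact keys of the returned dict are not shown here):

>>> from seqeval.metrics import classification_report
>>> y_true = [['B-PER', 'I-PER', 'O']]
>>> y_pred = [['B-PER', 'I-PER', 'O']]
>>> report = classification_report(y_true, y_pred, output_dict=True)
>>> isinstance(report, dict)
True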

v0.0.18

03 Oct 12:49
26b3bab

Stop raising an exception when get_entities takes non-NE input (#50).
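
For reference, get_entities turns a tag sequence into (type, start, end) chunks; a minimal sketch, assuming the import path below (the second call uses made-up non-NE input, and its return value is not shown):

>>> from seqeval.metrics.sequence_labeling import get_entities
>>> get_entities(['B-PER', 'I-PER', 'O', 'B-LOC'])
[('PER', 0, 1), ('LOC', 3, 3)]
>>> get_entities(['these', 'are', 'not', 'NE', 'tags'])  # no longer raises an exception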

v0.0.17

30 Sep 09:12
ca1ad9f

Update validation to fix #46 (#47).

v0.0.16

30 Sep 06:27
679a7c5

Fix classification_report when tags contain dashes in their names or when there is no tag (#38).