nervaluate vs. NER-Evaluation
These are **competitors**: both implement the same SemEval'13-based full named-entity evaluation metrics (entity-level scoring, as opposed to token-level scoring). nervaluate is the actively maintained choice, with roughly 30,000 monthly PyPI downloads, while NER-Evaluation is not published as a package at all.
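To make the distinction concrete, here is a minimal sketch (not either library's actual API) of the "strict" matching mode from the SemEval'13 scheme: a predicted entity counts as correct only when both its type and its exact span match a gold entity, so a boundary error earns no credit even if most tokens overlap.

```python
# Minimal sketch of SemEval'13-style "strict" entity-level F1.
# This is illustrative only; it is not nervaluate's or NER-Evaluation's API.

def strict_entity_f1(gold, pred):
    """gold/pred: sets of (entity_type, start, end) spans."""
    tp = len(gold & pred)            # exact type + boundary matches
    fp = len(pred - gold)            # predicted spans with no exact match
    fn = len(gold - pred)            # gold spans that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {("PER", 0, 2), ("LOC", 5, 6)}
pred = {("PER", 0, 2), ("LOC", 5, 7)}   # LOC boundary is off by one token
print(strict_entity_f1(gold, pred))      # 0.5: the near-miss LOC earns no credit
```

A token-level scorer would reward the overlapping LOC tokens here; the full SemEval'13 scheme instead reports this kind of boundary error under separate partial/exact/type categories, which is what both libraries implement.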
| Score | nervaluate | NER-Evaluation |
| --- | --- | --- |
| Maintenance | 13/25 | 0/25 |
| Adoption | 20/25 | 10/25 |
| Maturity | 25/25 | 16/25 |
| Community | 17/25 | 22/25 |
| Stat | nervaluate | NER-Evaluation |
| --- | --- | --- |
| Stars | 206 | 222 |
| Forks | 27 | 48 |
| Downloads | 30,907 | — |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Flags | No Dependents, Stale 6m | No Package, No Dependents |
About nervaluate
MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval'13
About NER-Evaluation
davidsbatista/NER-Evaluation
An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: not at tag/token level, but considering all the tokens that are part of the named entity
Scores updated daily from GitHub, PyPI, and npm data.