orionw/FollowIR
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
FollowIR evaluates and improves how well information retrieval models follow instructions when searching for documents. Given a query plus additional instructions, it measures how accurately a model ranks the relevant documents. It is aimed at developers and researchers working on search technologies and large language models who want these systems to respond reliably to complex user directives.
No commits in the last 6 months.
Use this if you are developing or evaluating information retrieval models and need to assess their ability to understand and execute nuanced search instructions.
Not ideal if you are an end-user simply looking for a search engine to use, rather than a tool for evaluating and training search models.
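The benchmark is distributed through the MTEB evaluation framework, so scoring a model takes only a short script. A minimal sketch, assuming the FollowIR datasets are registered in MTEB under the task names below and using an arbitrary embedding checkpoint purely for illustration (verify both against the repository README):

import mteb
from sentence_transformers import SentenceTransformer

# Illustrative checkpoint; any model exposing an encode() method works here.
model = SentenceTransformer("intfloat/e5-base-v2")

# Assumed task names for FollowIR's three instruction-retrieval datasets.
tasks = mteb.get_tasks(tasks=[
    "Core17InstructionRetrieval",
    "News21InstructionRetrieval",
    "Robust04InstructionRetrieval",
])

evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results/followir")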
Stars: 52
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jul 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/orionw/FollowIR"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
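The same endpoint works from any HTTP client. A minimal Python sketch of the no-key tier (the shape of the returned JSON is not documented here, so inspect it before depending on specific fields):

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/orionw/FollowIR"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
print(resp.json())       # field names are undocumented here; inspect the output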
Higher-rated alternatives
MantisAI/sieves
Plug-and-play document AI with zero-shot models.
xiaoya-li/Instruction-Tuning-Survey
Companion repository for the paper "Instruction Tuning for Large Language Models: A Survey"
TencentARC-QQ/TagGPT
TagGPT: Large Language Models are Zero-shot Multimodal Taggers
rafaelpierre/bullet
bullet: a zero-shot/few-shot, LLM-based text classification framework
amazon-science/adaptive-in-context-learning
AdaICL: Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection