MetaScreener and AIscreenR

MetaScreener is a standalone screening application, while AIscreenR is an R package that wraps screening APIs. The two are complements rather than competitors: in a systematic review workflow you could screen with MetaScreener, then post-process the results in R with AIscreenR.

                 MetaScreener     AIscreenR
Score            70 (Verified)    46 (Emerging)
Maintenance      22/25            10/25
Adoption         10/25            5/25
Maturity         24/25            16/25
Community        14/25            15/25
Stars            1,304            14
Forks            47               5
Downloads
Commits (30d)    226              0
Language         Python           R
License          Apache-2.0       GPL-3.0
Risk flags       None             No Package, No Dependents

About MetaScreener

ChaokunHong/MetaScreener

AI-powered tool for efficient abstract and PDF screening in systematic reviews.

This tool helps researchers, academics, and systematic review specialists quickly screen through large numbers of research papers for systematic reviews. You upload search results from databases like PubMed or Scopus, along with your review criteria (PICO/PEO/SPIDER), and it provides include/exclude decisions for each paper's title and abstract, complete with confidence scores. High-confidence decisions are automated, while uncertain cases are flagged for human review, significantly speeding up the screening process.
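The confidence-threshold triage described above can be sketched in a few lines. This is an illustrative Python example with a hypothetical data shape, not MetaScreener's actual API: decisions above a chosen confidence threshold are accepted automatically, the rest are queued for a human screener.

```python
# Illustrative triage of AI screening decisions by confidence score.
# The record structure and threshold are assumptions for this sketch,
# not MetaScreener's internal representation.

def triage(decisions, threshold=0.9):
    """Split screening decisions into automated and human-review queues."""
    automated, needs_review = [], []
    for d in decisions:
        if d["confidence"] >= threshold:
            automated.append(d)      # high confidence: keep the AI decision
        else:
            needs_review.append(d)   # uncertain: flag for a human screener
    return automated, needs_review

decisions = [
    {"id": "pmid:1", "decision": "include", "confidence": 0.97},
    {"id": "pmid:2", "decision": "exclude", "confidence": 0.55},
    {"id": "pmid:3", "decision": "exclude", "confidence": 0.93},
]
auto, review = triage(decisions)
print(len(auto), len(review))  # → 2 1
```

Raising the threshold trades screening speed for safety: fewer decisions are automated, but fewer borderline papers slip through without human eyes.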

systematic-review literature-review research-screening evidence-synthesis academic-research

About AIscreenR

MikkelVembye/AIscreenR

AI screening tools in R for systematic reviewing

This tool helps researchers, academics, and students conducting systematic literature reviews efficiently screen studies. It takes a list of research paper titles and abstracts (in RIS file format) along with your specific inclusion/exclusion criteria. Using AI, it helps identify relevant studies or significantly reduces the number of papers humans need to screen, outputting a clear inclusion/exclusion decision for each reference. This is ideal for anyone managing large volumes of academic literature.
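The RIS format mentioned above is a simple tagged text format: each line is a two-letter tag, two spaces, a hyphen, and a value, and `ER` closes a record. A minimal parser is sketched below in Python for illustration only; it shows the input the workflow consumes and is not AIscreenR's implementation (that package is written in R).

```python
# Minimal RIS record parser (illustrative sketch).
# RIS lines look like "TI  - Some title"; "ER  - " ends a record.

def parse_ris(text):
    records, current = [], {}
    for line in text.splitlines():
        if len(line) >= 6 and line[2:6] == "  - ":
            tag, value = line[:2], line[6:].strip()
            if tag == "ER":          # end of record: flush it
                records.append(current)
                current = {}
            else:                    # tags can repeat, so collect lists
                current.setdefault(tag, []).append(value)
    return records

sample = """TY  - JOUR
TI  - Effects of exercise on depression
AB  - A randomised trial of exercise as an adjunct treatment.
ER  - 
TY  - JOUR
TI  - Screening automation in reviews
AB  - We evaluate automated title and abstract screening.
ER  - 
"""
refs = parse_ris(sample)
print(len(refs), refs[0]["TI"][0])
```

A real review export from PubMed or Scopus carries many more tags (authors, journal, DOI), but the title (`TI`) and abstract (`AB`) fields are the ones an AI screener acts on.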

systematic-review literature-review academic-research knowledge-synthesis evidence-based-practice


Scores updated daily from GitHub, PyPI, and npm data.