BERTopic and turftopic
These are competitors offering alternative approaches to neural topic modeling: BERTopic builds a fixed pipeline around transformer embeddings, clustering, and c-TF-IDF keyword extraction, while turftopic provides a family of fast, robust topic models built on sentence-transformer embeddings, so practitioners typically choose based on which modeling paradigm and which trade-offs between interpretability and performance matter most to them.
About BERTopic
MaartenGr/BERTopic
Leveraging BERT and c-TF-IDF to create easily interpretable topics.
Combines dense transformer embeddings with dimensionality reduction (UMAP) and clustering (HDBSCAN) to discover coherent topics, then applies class-based TF-IDF to extract semantically meaningful keywords per cluster. Supports diverse modeling paradigms including supervised, hierarchical, dynamic, multimodal, and zero-shot approaches, with optional LLM-based topic representation for natural language summaries. Integrates with 🤗 Hugging Face transformers and offers pluggable backends for embeddings (Flair, spaCy, Gensim) and vision models for cross-modal topic discovery.
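The c-TF-IDF step mentioned above can be illustrated with a minimal pure-Python sketch. This is a simplification for intuition only, not BERTopic's actual implementation (which operates on scikit-learn sparse count matrices): each cluster's documents are concatenated into one "class document", and a term t in class c is weighted as tf(t, c) * log(1 + A / f(t)), where f(t) is t's total frequency across all classes and A is the average word count per class. The `c_tf_idf` function name and the toy corpus are illustrative choices, not part of BERTopic's API.

```python
import math
from collections import Counter

def c_tf_idf(clusters):
    """Toy class-based TF-IDF: concatenate each cluster's documents
    into one "class document", then weight each term t in class c as
        W[t, c] = tf(t, c) * log(1 + A / f(t))
    where f(t) is t's total frequency across all classes and A is the
    average number of words per class."""
    counts = [Counter(" ".join(docs).split()) for docs in clusters]
    f = Counter()  # term frequencies pooled over all classes
    for c in counts:
        f.update(c)
    avg = sum(sum(c.values()) for c in counts) / len(counts)
    return [
        {t: tf * math.log(1 + avg / f[t]) for t, tf in c.items()}
        for c in counts
    ]

# Two tiny clusters: terms concentrated in one cluster outrank
# terms that are common everywhere.
weights = c_tf_idf([
    ["cats purr", "cats meow"],
    ["dogs bark", "dogs bark loudly"],
])
print(max(weights[0], key=weights[0].get))  # prints "cats"
```

In BERTopic itself this weighting is what turns opaque HDBSCAN clusters into ranked keyword lists, which is where the "easily interpretable topics" claim comes from.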
About turftopic
x-tabdeveloping/turftopic
Robust and fast topic models with sentence-transformers.