llm-foundry and rllm

llm-foundry provides training code for large language models, while rllm is a library for relational table learning with LLMs. The two are complementary: rllm can apply models trained with llm-foundry to structured-data tasks.

                 llm-foundry        rllm
Score            71 (Verified)      54 (Established)
Maintenance      6/25               13/25
Adoption         18/25              10/25
Maturity         25/25              16/25
Community        22/25              15/25
Stars            4,397              440
Forks            584                34
Downloads        4,165
Commits (30d)    0                  0
Language         Python             Python
License          Apache-2.0         MIT
Risk flags       None               No package, no dependents

About llm-foundry

mosaicml/llm-foundry

LLM training code for Databricks foundation models

Implements end-to-end training, finetuning, evaluation, and inference pipelines with built-in support for efficiency techniques like Flash Attention and Mixture-of-Experts architectures. Integrates with Composer for distributed training optimization and MosaicML's platform for scalable workload orchestration, while supporting both HuggingFace and proprietary models (MPT, DBRX) from 125M to 132B parameters. Includes data preparation utilities for StreamingDataset format, inference export to ONNX/HuggingFace, and in-context learning evaluation on academic benchmarks.
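To make the pipeline above concrete, the project's quickstart follows roughly this shape: convert a dataset into StreamingDataset shards, then launch training through Composer with a YAML config. The commands below are a sketch based on the repository's published quickstart; exact script paths, flags, and config keys may differ between versions, so treat the dataset name, output directory, and override keys here as illustrative.

```shell
# Sketch of an llm-foundry quickstart (paths and flags may vary by version).

# 1. Convert a HuggingFace dataset (here: C4) into StreamingDataset shards.
python scripts/data_prep/convert_dataset_hf.py \
  --dataset c4 --data_subset en \
  --out_root my-copy-c4 --splits train_small val_small \
  --concat_tokens 2048 --tokenizer EleutherAI/gpt-neox-20b

# 2. Launch (distributed) training via Composer using a pretraining YAML,
#    pointing the loaders at the shards produced above.
composer scripts/train/train.py \
  scripts/train/yamls/pretrain/mpt-125m.yaml \
  train_loader.dataset.split=train_small \
  eval_loader.dataset.split=val_small
```

Composer handles multi-GPU launch and distributed-training setup, so the same command scales from a single GPU to a multi-node run on MosaicML's platform.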

About rllm

rllm-team/rllm

PyTorch library for relational table learning with LLMs.


Scores updated daily from GitHub, PyPI, and npm data.