llm-foundry and rllm
One tool provides code for training large language models, while the other is a library for relational table learning with LLMs. They complement each other: the latter can leverage models trained by the former for structured-data tasks.
About llm-foundry
mosaicml/llm-foundry
LLM training code for Databricks foundation models
Implements end-to-end training, finetuning, evaluation, and inference pipelines with built-in support for efficiency techniques like Flash Attention and Mixture-of-Experts architectures. Integrates with Composer for distributed training optimization and MosaicML's platform for scalable workload orchestration, while supporting both HuggingFace and proprietary models (MPT, DBRX) from 125M to 132B parameters. Includes data preparation utilities for StreamingDataset format, inference export to ONNX/HuggingFace, and in-context learning evaluation on academic benchmarks.
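Since llm-foundry's MPT models are published on the HuggingFace Hub, a common entry point is loading one through the standard transformers API rather than the training pipeline itself. The sketch below assumes the public "mosaicml/mpt-7b" checkpoint name; MPT ships custom model code, so trust_remote_code is required.

```python
def load_mpt(name: str = "mosaicml/mpt-7b"):
    """Load an MPT checkpoint via HuggingFace transformers.

    A minimal sketch, assuming the checkpoint is available on the Hub.
    The import is deferred so the sketch can be read without the large
    transformers dependency installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    # trust_remote_code=True lets transformers execute the custom MPT
    # modeling code bundled with the checkpoint.
    model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
    return tokenizer, model
```

Exporting to ONNX or converting to a plain HuggingFace format, as mentioned above, is handled by separate scripts in the llm-foundry repository.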
About rllm
rllm-team/rllm
PyTorch library for relational table learning with LLMs.