llm-sandbox and MPLSandbox
These are ecosystem siblings: both provide sandboxed code execution environments for LLMs, but they serve different execution paradigms within the same observability category. llm-sandbox is a Python library focused on securely running LLM-generated code at runtime (a code interpreter), while MPLSandbox focuses on returning unified compiler and static-analysis feedback across multiple programming languages.
About llm-sandbox
vndee/llm-sandbox
Lightweight and portable LLM sandbox runtime (code interpreter) Python library.
This is a developer tool for securely executing code generated by Large Language Models (LLMs). It accepts LLM-generated code in several programming languages, including Python, JavaScript, Java, C++, Go, and R, and runs it in an isolated, controlled environment. The output is the result of the execution, including any plots or visualizations, with no risk to the host system. Developers building AI applications or integrating LLMs into their workflows would use this to ensure safety.
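The core pattern here is running untrusted code in an isolated process with a timeout and capturing its output. The sketch below illustrates that idea in plain Python; it is a simplified stand-in, not llm-sandbox's actual API, and it uses a subprocess rather than the container-level isolation (Docker and similar backends) that llm-sandbox provides.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> dict:
    """Run a snippet of LLM-generated Python in a separate process.

    Simplified illustration only: llm-sandbox isolates code in containers,
    which a bare subprocess does not. This sketch shows the shape of the
    interaction (submit code, get back stdout/stderr and an exit code).
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            # -I puts the interpreter in isolated mode (ignores env vars
            # and user site-packages), a weak form of sandboxing.
            [sys.executable, "-I", path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return {
            "stdout": proc.stdout,
            "stderr": proc.stderr,
            "exit_code": proc.returncode,
        }
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": "timed out", "exit_code": -1}
    finally:
        os.unlink(path)

result = run_untrusted("print(sum(range(10)))")
```

The same submit-and-capture loop generalizes to the other supported languages by swapping the interpreter or compiler invocation inside the subprocess call.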
About MPLSandbox
Ablustrund/MPLSandbox
MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for LLMs.
This tool helps researchers working with Large Language Models (LLMs) to automatically analyze code written in multiple programming languages. You provide code and unit tests, and it delivers unified feedback from compilers and various code analysis tools. Researchers can then use this detailed information to improve the performance of their LLMs on coding tasks.
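The workflow described above (submit code plus unit tests, receive aggregated compiler and test feedback) can be sketched as a two-stage pipeline. The function below is a hypothetical illustration of that aggregation idea, not MPLSandbox's real interface: the actual tool runs per-language compilers and analysis tools inside isolated containers, whereas this sketch uses Python's own bytecode compiler as the "compiler" stage and a subprocess for the test stage.

```python
import os
import subprocess
import sys
import tempfile

def unified_feedback(code: str, test_code: str) -> dict:
    """Aggregate compiler-level and test-level feedback for a snippet.

    Hypothetical sketch of the feedback-aggregation idea behind
    multi-stage sandboxes; names and structure are illustrative only.
    """
    feedback = {
        "syntax_ok": True,
        "syntax_error": None,
        "tests_passed": False,
        "test_output": "",
    }
    # Stage 1: "compiler" feedback via Python's bytecode compiler.
    try:
        compile(code, "<snippet>", "exec")
    except SyntaxError as exc:
        feedback["syntax_ok"] = False
        feedback["syntax_error"] = str(exc)
        return feedback  # no point running tests on unparseable code
    # Stage 2: unit-test feedback, executing code + tests in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + test_code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=10
        )
        feedback["tests_passed"] = proc.returncode == 0
        feedback["test_output"] = proc.stderr or proc.stdout
    finally:
        os.unlink(path)
    return feedback

fb = unified_feedback(
    "def add(a, b):\n    return a + b\n",
    "assert add(2, 3) == 5\n",
)
```

A training loop could feed the structured `feedback` dictionary back to the model as a reward signal or as context for a repair attempt, which is the kind of use the researchers' workflow describes.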