mlx-vlm and Local_LLM_Training_Apple_Silicon
About mlx-vlm
Blaizzy/mlx-vlm
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
This project describes or answers questions about images, audio, and video. You provide a visual, audio, or multi-modal input along with a question or prompt, and the tool generates a textual response. It is aimed at anyone working with multimedia content on a Mac who needs to extract information from media or generate descriptions of it.
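As a sketch of how such a prompt-plus-image query might look from the command line (the model identifier, flags, and image path here are assumptions drawn from mlx-vlm's typical usage pattern, not verified defaults; running it requires an Apple Silicon Mac):

```shell
# Hypothetical invocation: ask a quantized vision-language model hosted on
# the mlx-community hub to describe a local image. Model name and image
# path are placeholders.
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --max-tokens 100 \
  --prompt "Describe this image." \
  --image path/to/photo.jpg
```

The generated description is written to stdout; the same prompt/input pattern extends to other modalities where the chosen model supports them.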
About Local_LLM_Training_Apple_Silicon
GusLovesMath/Local_LLM_Training_Apple_Silicon
Created a local LLM training system on Apple Silicon with MLX and the Metal API, working around the absence of CUDA support. Fine-tuned the Llama3 model on 16 GPUs to solve verbose math word problems. The result is a powerful, privacy-preserving chatbot that runs smoothly on-device.
This project offers a specialized chatbot designed to solve complex math word problems. You provide a detailed math problem in plain English, and the chatbot delivers a clear, concise solution. It's ideal for students, educators, or anyone needing quick, private assistance with verbose mathematical reasoning, running directly on your Apple device.
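A fine-tuning run of this shape can be sketched with MLX's LoRA entry point (the model name, dataset path, and hyperparameters below are placeholder assumptions for illustration, not the repository's actual configuration):

```shell
# Hypothetical LoRA fine-tune of a quantized Llama3 checkpoint on a local
# dataset of math word problems; requires an Apple Silicon Mac with the
# mlx-lm package installed. All paths and values are placeholders.
python -m mlx_lm.lora \
  --model mlx-community/Meta-Llama-3-8B-Instruct-4bit \
  --train \
  --data ./math_word_problems \
  --iters 600 \
  --batch-size 4
```

Because both training and inference stay on-device, problem text never leaves the machine, which is what makes the resulting chatbot privacy-preserving.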