Multi-tenant fine-tuning for local LLMs with Tinker-compatible API
Updated Mar 24, 2026 · Python
AI agent with multi-agent orchestration, autonomous cognitive systems, and a full management dashboard
🚀 Unified NLP Pipelines for Language Models
A lightweight, self-contained Python project for running local LLM personalities with minimal dependencies. This system uses TinyLlama-1.1B-Chat-v1.0 and llama-cpp-python for inference, and Rich for a user-friendly console chat interface. It is an expansion of Tiny-Local-llm that lets you select one of three basic personalities.
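A stack like the one described (llama-cpp-python for inference, Rich for the console UI, a small set of selectable personalities) can be sketched roughly as below. This is a minimal illustration, not code from the repository: the model filename, personality names, and helper functions are all assumptions.

```python
# Minimal sketch of a local personality chat loop, in the style described
# above. Assumes a TinyLlama GGUF file at MODEL_PATH; all names here are
# illustrative, not taken from the actual project.
from pathlib import Path

MODEL_PATH = Path("tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf")  # hypothetical filename

PERSONALITIES = {
    "helpful": "You are a concise, helpful assistant.",
    "pirate": "You answer every question as a cheerful pirate.",
    "poet": "You reply only in short rhyming verse.",
}

def build_messages(personality: str, history: list[dict], user_text: str) -> list[dict]:
    """Assemble the chat-completion message list for one turn."""
    system = {"role": "system", "content": PERSONALITIES[personality]}
    return [system, *history, {"role": "user", "content": user_text}]

def main() -> None:
    # Heavy imports stay inside main() so the pure helper above works
    # even without llama-cpp-python or Rich installed.
    from llama_cpp import Llama
    from rich.console import Console
    from rich.prompt import Prompt

    console = Console()
    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048, verbose=False)
    personality = Prompt.ask("Pick a personality", choices=list(PERSONALITIES))
    history: list[dict] = []
    while True:
        user_text = Prompt.ask("[bold cyan]you[/bold cyan]")
        if user_text.strip().lower() in {"quit", "exit"}:
            break
        messages = build_messages(personality, history, user_text)
        out = llm.create_chat_completion(messages=messages)
        reply = out["choices"][0]["message"]["content"]
        history += [{"role": "user", "content": user_text},
                    {"role": "assistant", "content": reply}]
        console.print(f"[bold magenta]bot[/bold magenta]: {reply}")

if __name__ == "__main__":
    if MODEL_PATH.exists():  # only start the loop when a model is present
        main()
```

Keeping the history list outside the model call makes the personality switchable per-session while the system prompt stays pinned at position zero of every request.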
A lightweight CLI to orchestrate Gemini and GPT using your local files as a shared blackboard.
J.A.R.V.I.S: An AI-powered Open Source Intelligence (OSINT) system. It orchestrates deep web scraping and local LLMs to autonomously generate comprehensive intelligence dossiers.
An entirely offline, privacy-centric voice assistant that leverages lightweight local AI for speech-to-text (Vosk), large language model processing (GGUF via Llama.cpp), and text-to-speech (Kokoro), offering seamless, low-latency, and secure voice interactions directly from your machine.
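The three-stage flow that description names (Vosk speech-to-text, a GGUF model via Llama.cpp, Kokoro text-to-speech) amounts to a simple pipeline. A hedged sketch of just the wiring, with each stage passed in as a plain callable so no audio hardware or model files are needed:

```python
# Illustrative sketch of one voice-assistant round trip. The concrete
# stages (Vosk STT, Llama.cpp LLM, Kokoro TTS) are injected as callables;
# the signatures here are assumptions, not the libraries' real APIs.
from typing import Callable

def voice_turn(stt: Callable[[bytes], str],
               llm: Callable[[str], str],
               tts: Callable[[str], bytes],
               audio_in: bytes) -> tuple[str, str, bytes]:
    """One round trip: raw audio -> transcript -> reply text -> spoken audio."""
    transcript = stt(audio_in)
    reply = llm(transcript)
    return transcript, reply, tts(reply)
```

Because every stage is local and synchronous, latency is bounded by the slowest stage rather than by any network hop, which is the point of the fully offline design.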
Local-first proactive finance agent combining deterministic financial analytics with grounded LLM chat that runs fully on your machine with Ollama, PostgreSQL + pgvector, and Streamlit.
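Grounded chat over PostgreSQL + pgvector typically hinges on one retrieval query: embed the user's question, then pull the nearest rows by vector distance and hand them to the LLM as context. A minimal sketch of that SQL (table and column names are assumptions, not taken from the project; `<->` is pgvector's L2-distance operator):

```python
# Hypothetical top-k retrieval SQL for a pgvector-backed finance agent.
# The query embedding is bound as a parameter at execution time, e.g.
# cur.execute(similarity_sql(), (embedding,)) with psycopg.
def similarity_sql(table: str = "transactions", k: int = 5) -> str:
    """Return SQL selecting the k rows nearest to a query embedding."""
    return (
        f"SELECT id, description, embedding <-> %s::vector AS distance "
        f"FROM {table} ORDER BY distance LIMIT {k}"
    )
```

Keeping deterministic analytics in SQL and using the LLM only to narrate retrieved rows is what keeps the chat "grounded": the model never invents numbers, it paraphrases query results.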
Fully local autonomous AI research agent using Ollama with tool-based web search and reasoning.
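A tool-using agent loop of the kind described usually alternates between asking the model and executing whatever tool it requests, feeding observations back until the model emits a plain-text answer. A sketch under that assumption, with the "model" as any callable so a stub can stand in for Ollama (the JSON tool-call convention here is illustrative, not Ollama's API):

```python
# Hedged sketch of a tool-use loop: if the model replies with JSON like
# {"tool": ..., "input": ...} we run that tool and append the observation;
# any non-JSON reply is treated as the final answer.
import json
from typing import Callable

def run_agent(model: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              question: str,
              max_steps: int = 5) -> str:
    """Drive the observe/act loop for at most max_steps tool calls."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = model(transcript)
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text means the model is done reasoning
        observation = tools[call["tool"]](call["input"])
        transcript += f"Observation: {observation}\n"
    return "No answer within step budget."
```

The step budget matters for autonomous local agents: without it, a model that keeps requesting searches loops forever on your own hardware.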
This repository collects LLM projects of all kinds, from basic to advanced.