A generalized information-seeking agent system with Large Language Models (LLMs).
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
The PyVisionAI Official Repo
Fenix AI Trading Bot with LangGraph, Ollama, and multiple providers
A CLI, a web UI, and an MCP server for the Z-Image-Turbo text-to-image generation model (the Tongyi-MAI/Z-Image-Turbo base model as well as quantized models)
MVP of an idea that uses multiple local LLM models to simulate and play D&D
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (llm) on fire (Vulkan).
BMO - Local AI companion
A local chatbot for managing documents
Open WebUI Tools for Google Services: Manage contacts, send emails, and schedule Google Meet meetings directly from your AI assistant using Google APIs.
Demo project showcasing Gemma3 function calling capabilities using Ollama. Enables automatic web searches via Serper.dev for up-to-date information and features an interactive Gradio chat interface.
An automated pipeline that converts e-books into beautifully illustrated scenes using local AI backends (ComfyUI and SD Forge).