What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
The Context Optimization Layer for LLM Applications
[ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
Give Claude Code photographic memory in ONE portable file. No database, no SQLite, no ChromaDB - just a single .mv2 file you can git commit, scp, or share. Native Rust core with sub-ms operations.
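The single-portable-file idea generalizes beyond the .mv2 format above. As a stand-in illustration only (JSONL here is not that repo's actual format, and this naive substring search is nothing like its indexed sub-ms lookups), a plain append-only file is already enough to git commit, scp, and query:

```python
import json
import time
from pathlib import Path

MEM = Path("memory.jsonl")  # stand-in for a single portable memory file

def remember(text: str, tags: list[str]) -> None:
    # Append-only, one JSON object per line: keeps the file git-diff friendly.
    with MEM.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), "tags": tags, "text": text}) + "\n")

def recall(query: str) -> list[str]:
    # Naive substring match for illustration; real tools index for speed.
    if not MEM.exists():
        return []
    return [
        json.loads(line)["text"]
        for line in MEM.read_text(encoding="utf-8").splitlines()
        if query.lower() in line.lower()
    ]

remember("Deploy uses blue/green; cut over after smoke tests", ["ops"])
print(recall("deploy"))
```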
Find the ghost tokens. Fix them. Survive compaction. Avoid context quality decay.
Make your OpenClaw AI agent faster, smarter, and cheaper. Speed optimization, memory architecture, context management, model selection, and one-shot development guide.
Open-source context infrastructure for AI agents. Auto-capture and share your agents' context everywhere.
Supercharge AI Agents, Safely
Config-driven CLI tool that compresses command output before it reaches an LLM's context window.
CLI proxy that reduces LLM token usage by 60-90%. Declarative YAML filters for Claude Code, Cursor, Copilot, Gemini. rtk alternative in Go.
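The two entries above describe the same pattern: declarative rules that shrink noisy command output before it hits the model. A minimal sketch of that pattern follows; the rule names ("drop", "keep_first", "strip_prefix") and structure are illustrative assumptions, not any of these tools' actual config schemas:

```python
import re
import sys

# Hypothetical filter rules, applied first-match-wins to each line.
RULES = [
    {"match": r"^\s*at .+\(.+:\d+\)$", "action": "drop"},                 # stack-trace frames
    {"match": r"^(warning|note):", "action": "keep_first", "limit": 3},   # cap repeated diagnostics
    {"match": r"^\d{4}-\d{2}-\d{2}T[\d:.]+Z?\s+", "action": "strip_prefix"},  # timestamps
]

def compress(lines: list[str]) -> list[str]:
    """Apply the first matching rule to each line: drop it, cap it, or trim it."""
    hits: dict[int, int] = {}
    out = []
    for line in lines:
        keep, text = True, line
        for i, rule in enumerate(RULES):
            if not re.search(rule["match"], text):
                continue
            if rule["action"] == "drop":
                keep = False
            elif rule["action"] == "keep_first":
                hits[i] = hits.get(i, 0) + 1
                keep = hits[i] <= rule["limit"]
            elif rule["action"] == "strip_prefix":
                text = re.sub(rule["match"], "", text)
            break
        if keep:
            out.append(text)
    return out

if __name__ == "__main__":
    # Pipe any noisy command through it: `pytest 2>&1 | python filter.py`
    print("\n".join(compress(sys.stdin.read().splitlines())))
```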
A discovery and compression tool for your Python codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project. Code structure visualization, static analysis for AI, and context-window-efficient LLM tooling.
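The outlining idea above can be sketched with nothing but the standard-library ast module. This flat walker is an illustration of the technique, not that tool's actual implementation:

```python
import ast
from pathlib import Path

def outline(path: Path) -> str:
    """Return a compact skeleton of one module: class names and signatures only."""
    tree = ast.parse(path.read_text(encoding="utf-8"))
    lines = [f"# {path}"]
    # ast.walk flattens nesting, so methods appear alongside their classes;
    # good enough to orient a model without pasting full source.
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
    return "\n".join(lines)

if __name__ == "__main__":
    # Concatenate skeletons for a whole package instead of its full sources.
    for f in sorted(Path("src").rglob("*.py")):
        print(outline(f))
```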
A local-first memory layer for AI (Cursor, Zed, Claude). Persistent architectural context via semantic search.
Local-first, persistent memory for all agents and humans
Your AI (Claude Opus, Codex 5.4) sees 5% of your codebase and hallucinates the rest. Entroly fixes this: 95% fewer tokens, 100% code visibility. Works with Cursor, Claude Code, Copilot.
Inject relevant documentation into your prompts: 98% token savings.
Transform and optimize your markdown documentation for Large Language Models (LLMs) and RAG systems. Generate llms.txt automatically.
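For reference, llms.txt (per the llmstxt.org proposal) is a markdown index: an H1 title, a blockquote summary, then link sections. A minimal generator sketch, assuming a docs/ folder of markdown files (the project name and layout are placeholders):

```python
from pathlib import Path

def make_llms_txt(name: str, summary: str, docs_dir: Path) -> str:
    # Format per the llmstxt.org proposal: H1 title, blockquote summary,
    # then H2 sections containing markdown link lists.
    lines = [f"# {name}", "", f"> {summary}", "", "## Docs", ""]
    for md in sorted(docs_dir.rglob("*.md")):
        title = md.stem.replace("-", " ").title()
        lines.append(f"- [{title}]({md.as_posix()})")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    Path("llms.txt").write_text(
        make_llms_txt("MyProject", "One-line summary of what the project does.", Path("docs")),
        encoding="utf-8",
    )
```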
Building Agents with LLM structured generation (BAML), MCP Tools, and 12-Factor Agents principles
Analyze your Claude Code context window, detect wasted tokens, and get pasteable fix commands. Zero API calls
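One simple way to detect wasted tokens is to flag exact-duplicate blocks in a context dump. This sketch is an assumption about the general technique, not that analyzer's method; the paragraph granularity and the ~4-characters-per-token estimate are both placeholders:

```python
from collections import Counter

def wasted_tokens(context: str) -> int:
    """Estimate tokens spent on exact-duplicate paragraphs in a context dump."""
    paras = [p.strip() for p in context.split("\n\n") if p.strip()]
    dup_chars = sum(len(p) * (n - 1) for p, n in Counter(paras).items() if n > 1)
    return dup_chars // 4  # ~4 chars per token, a rough heuristic

print(wasted_tokens("same tool output\n\nsame tool output\n\nunique reply"))
```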
Optimize AI workflows with Arachne. Automatically assembles the perfect code context (Tree, Target, Deps, Semantic) to fit context windows without noise. Built for efficiency and scale.
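The Tree/Target/Deps/Semantic assembly above is an instance of a general technique: rank candidate sections, then pack them greedily under a token budget. A sketch with placeholder relevance scores and a crude token heuristic, not Arachne's actual algorithm:

```python
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

def assemble(sections: list[tuple[str, str, float]], budget: int) -> str:
    """sections = (label, text, relevance); pack highest relevance first."""
    chosen, used = [], 0
    for label, text, _score in sorted(sections, key=lambda s: -s[2]):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(f"## {label}\n{text}")
            used += cost
    return "\n\n".join(chosen)

print(assemble(
    [
        ("Target", "def handler(event): ...", 1.0),
        ("Tree", "src/\n  app.py\n  utils.py", 0.9),
        ("Deps", "def helper(x): ...", 0.7),
        ("Semantic", "snippet retrieved by embedding search", 0.5),
    ],
    budget=2000,
))
```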
Grab, filter, and bundle your codebase for Claude and ChatGPT right from your terminal.