One command. Every doc. Smooshed. Pure bash CLI for RAG ingestion.
LLMs have amnesia. NeverForget is the cure. A universal, infinite-memory proxy for any LLM API.
A universal task-pipeline subagent for the Claude Code CLI.
Breathing-window memory system for LLM chatbots with GPT-5 Nano summarization. Efficient context management using a sliding-window algorithm.
Proactive context synthesis for AI coding agents. Build the right context before the first token.
Intelligently prepare your codebase as LLM context with dependency-aware file selection and token budgeting
High-performance, low-latency MCP server for local code context. Features AST-aware symbol graphs, semantic reranking, and PageRank-scored search for AI agents (Claude, Cursor, and more).
Protocol specifications for AI agent memory, architecture, and identity — AMP, AMPS, ATLAS, Breath Cycle, Identity System, Git Identity. By Curtis Mercier.
Git-history-aware codebase context generator for LLMs
A technical white paper on how LLMs handle memory, why context windows alone are not enough, and what production engineers need to know about memory architectures, security risks, and the road to in-weights personalisation.
Keep AI agents from forgetting. Keep context small. Hot layer + cold layer + auto decisions. Zero memory loss.
Turns your local codebase into a secure, token-optimized context prompt for LLMs like ChatGPT and Claude.
🏗️ AI-friendly Node.js project architecture standards. Keep files <400 lines for AI agents. Covers H5 games, data tools, APIs, SDKs. 70-93% token savings. OpenClaw skill.
Benchmark any system that transforms LLM context: compressors, RAG rerankers, memory managers, and more.
A Controlled Natural Language (CNL) for AI designed to "minify" language and make AI context denser.
Token Killer for Pi — reduce LLM token consumption by 60-90% on common dev commands
Gitra is a self-hosted web app to explore, analyze, and "chat" with any public GitHub repository. It bundles codebases into LLM-optimized context for instant AI analysis using Google Gemini.
Hierarchical Attention Tree: 100% recall at 70x faster build times than HNSW. A new database paradigm for AI memory and hierarchical semantic search.
Prepare your code for AI. Concatenates your project into a clean, markdown-formatted file with file trees and granular filtering options.
A proposed standard for the .agents/ directory to prevent context bloat and improve agent reasoning in complex codebases.
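Several of the tools listed above manage context with a token-budgeted sliding window over recent messages. A minimal, generic sketch of that idea (the class, the 4-characters-per-token heuristic, and the budget value are illustrative assumptions, not taken from any specific project):

```python
from collections import deque

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

class SlidingWindowMemory:
    """Keep only the most recent messages that fit a fixed token budget."""

    def __init__(self, max_tokens: int = 2048):
        self.max_tokens = max_tokens
        self.messages: deque = deque()
        self.used = 0

    def add(self, message: str) -> None:
        self.messages.append(message)
        self.used += estimate_tokens(message)
        # Evict oldest messages until the window fits the budget again,
        # always keeping at least the newest message.
        while self.used > self.max_tokens and len(self.messages) > 1:
            evicted = self.messages.popleft()
            self.used -= estimate_tokens(evicted)

    def context(self) -> str:
        return "\n".join(self.messages)
```

Production systems typically refine this by summarizing evicted messages (as the GPT-5 Nano summarization entry suggests) rather than dropping them outright.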