```bash
# Clone the repository
git clone https://github.com/dollspace-gay/openclaudia.git
cd openclaudia

# Build release version (includes browser/web search support by default)
cargo build --release

# Build without browser feature (lighter binary, no headless Chrome)
cargo build --release --no-default-features

# The binary is at target/release/openclaudia
```
## Quick Start

```bash
# Set your API key (choose your provider)
export ANTHROPIC_API_KEY="your-key-here"
# or: export OPENAI_API_KEY="your-key-here"
# or: export GOOGLE_API_KEY="your-key-here"
# or: export DEEPSEEK_API_KEY="your-key-here"

# Initialize configuration in your project
openclaudia init

# Start chatting (uses default provider from config)
openclaudia

# Use a specific model (provider auto-detected from model name)
openclaudia -m gemini-2.5-flash
openclaudia -m gpt-4o
openclaudia -m claude-sonnet-4-20250514
```
## Configuration

### Environment Variables
| Variable | Provider | Required |
|----------|----------|----------|
| `ANTHROPIC_API_KEY` | Anthropic (Claude) | For Anthropic |
| `OPENAI_API_KEY` | OpenAI (GPT) | For OpenAI |
| `GOOGLE_API_KEY` | Google (Gemini) | For Google |
| `DEEPSEEK_API_KEY` | DeepSeek | For DeepSeek |
| `QWEN_API_KEY` | Qwen/Alibaba | For Qwen |
| `ZAI_API_KEY` | Z.AI (GLM) | For Z.AI |
| `TAVILY_API_KEY` | Web search | Optional |
| `BRAVE_API_KEY` | Web search (alt) | Optional |
### Config File

Configuration is stored in `.openclaudia/config.yaml`:
```yaml
proxy:
  port: 8080
  host: "127.0.0.1"
  target: anthropic  # Provider: anthropic, openai, google, deepseek, qwen, zai, ollama, local

providers:
  anthropic:
    base_url: https://api.anthropic.com
  openai:
    base_url: https://api.openai.com
  google:
    base_url: https://generativelanguage.googleapis.com
  deepseek:
    base_url: https://api.deepseek.com
  # Ollama for local LLM inference
  ollama:
    base_url: http://localhost:11434
  # Any OpenAI-compatible local server (LM Studio, LocalAI, etc.)
  local:
    base_url: http://localhost:1234/v1

# Thinking/reasoning mode configuration
thinking:
  enabled: false
  budget_tokens: 10000        # Anthropic, Google Gemini 2.5
  reasoning_effort: "medium"  # OpenAI o1/o3: low, medium, high

session:
  timeout_minutes: 30
  persist_path: .openclaudia/session
  max_turns: 25  # 0 = unlimited agentic loop iterations

# Verification-Driven Development (VDD) - Adversarial code review
# vdd:
#   enabled: true
#   mode: advisory  # advisory (single pass) or blocking (loop until clean)
#   adversary:
#     provider: google  # Must differ from proxy.target
#     model: gemini-2.5-flash

# Granular tool permissions
# permissions:
#   denied_tools: ["bash"]
#   denied_commands: ["rm -rf /"]

# Customize keybindings
keybindings:
  ctrl-x n: new_session
  ctrl-x x: export
  tab: toggle_mode
  escape: cancel
```
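To switch providers, only the relevant keys need to change. For example, a minimal override that points the proxy at a local Ollama instance might look like this (a sketch based on the defaults above; keys you omit keep their default values):

```yaml
proxy:
  target: ollama

providers:
  ollama:
    base_url: http://localhost:11434
```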
## CLI Commands

```bash
openclaudia                    # Start interactive chat (default)
openclaudia -m <model>         # Use specific model (auto-detects provider)
openclaudia -v                 # Verbose logging
openclaudia --resume           # Resume last session
openclaudia --session-id <id>  # Resume specific session
openclaudia --coordinator      # Multi-agent coordinator mode
openclaudia --tui-mode         # Full-screen TUI (experimental)

openclaudia init               # Initialize config in current directory
openclaudia init --force       # Overwrite existing config

openclaudia auth               # Authenticate with Claude Max (OAuth)
openclaudia auth --status      # Check auth status
openclaudia auth --logout      # Clear stored credentials

openclaudia start              # Start as proxy server
openclaudia start -p 9090      # Custom port
openclaudia start -t openai    # Target specific provider

openclaudia acp                # Start ACP server on stdin/stdout
openclaudia acp -m <model>     # ACP with specific model

openclaudia loop               # Start iteration mode with Stop hooks
openclaudia loop -m 10         # Max 10 iterations

openclaudia config             # Show current configuration
openclaudia doctor             # Check connectivity and API keys
```
## Slash Commands (In Chat)

### Navigation & Sessions

| Command | Description |
|---------|-------------|
| `/help`, `/?` | Show help message |
| `/new`, `/clear` | Start new conversation |
| `/sessions`, `/list` | List saved sessions |
| `/continue <n>`, `/load <n>`, `/resume <n>` | Load session by number |
| `/export` | Export conversation to markdown |
| `/history` | Show all messages in current session |
| `/undo` | Undo last message exchange |
| `/redo` | Redo last undone exchange |
| `/exit`, `/quit`, `/q` | Exit the chat |
### Model & Configuration

| Command | Description |
|---------|-------------|
| `/model` | Show current model |
| `/models` | List available models |
| `/model <name>` | Switch to a different model mid-session |
| `/config` | Show current configuration |
| `/config path` | Show config file locations |
| `/connect`, `/auth` | Configure API keys |
| `/login` | Check authentication status |
| `/effort [low\|medium\|high]` | Set effort level |
| `/mode` | Toggle Build/Plan mode |
| `/vim` | Toggle vim mode |
### Session Info

| Command | Description |
|---------|-------------|
| `/status`, `/info` | Show session status |
| `/rename <title>` | Rename current session |
| `/cost` | Show session cost estimate |
| `/context` | Show context window usage breakdown |
| `/compact`, `/summarize` | Summarize old messages to save context |
| `/version`, `/v`, `/about` | Show version and system info |
| `/debug` | Show internal state (paths, env, config) |
| `/doctor` | Run inline diagnostics |
### Project & Development

| Command | Description |
|---------|-------------|
| `/review` | Review uncommitted git changes |
| `/commit` | Auto-commit with generated message |
| `/commit-push-pr` | Commit, push, and create PR |
| `/find <query>`, `/f <query>` | Fuzzy-find files in project |
| `/init` | Initialize project config |
| `/editor`, `/edit`, `/e` | Open external editor for long messages |
| `/copy`, `/yank`, `/y` | Copy last response to clipboard |
### Display

| Command | Description |
|---------|-------------|
| `/theme [name]` | List or switch color themes |
| `/keys`, `/keybindings` | Show keybindings |
### Plugins & Skills

| Command | Description |
|---------|-------------|
| `/plugin` | List installed plugins |
| `/plugin install <name>` | Install a plugin |
| `/plugin enable/disable <name>` | Enable or disable a plugin |
| `/plugin marketplace list` | List marketplace sources |
| `/skill <name>` | Load and invoke a skill |
| `/<plugin>:<command>` | Run a plugin command |
### Shell & Files

| Command | Description |
|---------|-------------|
| `!<command>` | Run shell command directly |
| `@<file>` | Attach file to prompt |
### Memory & Activity Commands

| Command | Description |
|---------|-------------|
| `/memory` | Show auto-learning statistics |
| `/memory patterns` | Show learned coding patterns |
| `/memory errors <file>` | Show known error patterns for a file |
| `/memory prefs` | Show learned user preferences |
| `/memory files <file>` | Show file co-edit relationships |
| `/memory reset` | Reset all learned data (with confirmation) |
| `/activity` | Show recent session activities |
| `/activity files` | Show recently modified files |
| `/activity tools` | Show recent tool usage |
## Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl-X N` | New session |
| `Ctrl-X L` | List sessions |
| `Ctrl-X X` | Export conversation |
| `Ctrl-X Y` | Copy last response |
| `Ctrl-X E` | Open external editor |
| `Ctrl-X M` | Show models |
| `Ctrl-X S` | Show status |
| `Ctrl-X H` | Show help |
| `Tab` | Toggle Build/Plan mode |
| `Escape` | Cancel current response |
| `F2` | Show models |
## Available Tools

### Core Tools

| Tool | Description |
|------|-------------|
| `bash` | Execute shell commands with optional timeout and background mode |
| `bash_output` | Get output from background shells or list all running shells |
### Ollama

- Any model installed — run `ollama list` to see available models

### OpenAI-Compatible (Local)

- Works with LM Studio, LocalAI, text-generation-webui, vLLM, and any OpenAI-compatible server
- Set `base_url` to your local server (e.g., `http://localhost:1234/v1`)
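For example, running against an LM Studio server on its default port only requires selecting the `local` provider (a minimal sketch; the port depends on your local server's settings):

```yaml
proxy:
  target: local

providers:
  local:
    base_url: http://localhost:1234/v1
```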
## Verification-Driven Development (VDD)
OpenClaudia includes a built-in adversarial code review system. When enabled, a separate AI model (the "adversary") reviews every response for bugs, security vulnerabilities, and logic errors.
```yaml
vdd:
  enabled: true
  mode: advisory  # Single-pass review, findings injected as context
  adversary:
    provider: google  # Use a different provider than your builder
    model: gemini-2.5-flash
  static_analysis:
    auto_detect: true  # Automatically runs cargo clippy, cargo test, etc.
```
Two modes:

- **Advisory** — Single adversary pass after each response. Findings are displayed and injected into context for the next turn.
- **Blocking** — Full adversarial loop. The builder must revise until the adversary's findings converge to false positives (confabulation threshold).
Findings include CWE classifications, severity levels (CRITICAL/HIGH/MEDIUM/LOW/INFO), and can automatically create Chainlink issues for tracking.
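Switching to the blocking loop is a one-key change relative to the advisory example above (a sketch; all other keys stay the same):

```yaml
vdd:
  enabled: true
  mode: blocking  # Builder revises until the adversary's findings converge
  adversary:
    provider: google  # Must differ from proxy.target
    model: gemini-2.5-flash
```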
## Hooks

Configure hooks in `.openclaudia/config.yaml` to run scripts at key moments:

- `pre_tool_use` — Before executing a tool (with matcher for specific tools)
- `post_tool_use` — After executing a tool
- `stop` — For iteration/loop mode control
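A hooks block might look like the following. The key names (`matcher`, `command`) and the script paths are illustrative assumptions, not a confirmed schema; check `openclaudia config` for the exact shape your version supports:

```yaml
hooks:
  pre_tool_use:
    - matcher: bash                     # Only fires for the bash tool
      command: ./scripts/audit-cmd.sh
  post_tool_use:
    - command: cargo fmt                # Run after every tool execution
  stop:
    - command: ./scripts/check-done.sh  # Used by `openclaudia loop` iteration control
```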
## Auto-Learning Memory

OpenClaudia automatically learns from your coding sessions without any flags or model intervention. A SQLite database (`.openclaudia/memory.db`) captures knowledge from tool execution signals:

- **Coding Patterns** — Conventions, pitfalls, and architecture observed from lint output and edit failures
- **Error Resolutions** — Errors encountered and how they were fixed, matched automatically when subsequent commands succeed
- **File Relationships** — Files frequently edited together (co-edit tracking), surfaced when you touch related code
- **User Preferences** — Style and workflow preferences detected from corrections ("no, use tabs") and explicit statements ("always use snake_case")
- **Session Continuity** — Recent session summaries and activity logs for context across restarts

Knowledge is injected into the model's context automatically — file-specific patterns when you read/edit a file, and preferences in every system prompt. Use the `/memory` commands to inspect what has been learned.