
fix: skip API key requirement when only non-LLM tools are enabled#409

Open
vedant-hawcx wants to merge 3 commits into BeehiveInnovations:main from vedant-hawcx:fix/skip-api-key-for-non-llm-tools

Conversation

@vedant-hawcx vedant-hawcx commented Mar 5, 2026

Summary

  • When all LLM-requiring tools are disabled via DISABLED_TOOLS, the server now starts without requiring any API keys
  • Non-LLM tools (clink, version, listmodels, challenge, apilookup) work independently of provider configuration
  • Auto-mode validation is also skipped when no LLM tools are active
  • Addresses issue #402 (Use CLI for chats instead of API)

Problem

If you only need clink to bridge requests to external CLI programs, you still had to configure a valid API key or the server would crash on startup with:

`ValueError: At least one API configuration is required.`

This forced users to obtain and configure API keys they don't need.

Changes

  • server.py: Made the provider requirement check conditional, so it is only enforced when LLM tools are in the active tool set
  • server.py: Made the auto-mode model availability check conditional on LLM tools being active
  • Added warning log when no providers are configured (instead of crashing)
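A minimal sketch of what such a conditional check might look like. The helper name `has_llm_tools` and the shape of the tool registry are assumptions for illustration, not the actual `server.py` code:

```python
import logging

logger = logging.getLogger(__name__)


def has_llm_tools(active_tools) -> bool:
    """Return True if any active tool needs an LLM provider.

    Assumes each tool object exposes a requires_model() method, as the
    follow-up commit in this PR describes. This is a sketch, not the
    PR's implementation.
    """
    return any(tool.requires_model() for tool in active_tools.values())


def validate_providers(active_tools, configured_providers) -> None:
    """Crash only when a missing provider would actually matter."""
    if not configured_providers:
        if has_llm_tools(active_tools):
            raise ValueError("At least one API configuration is required.")
        logger.warning(
            "No API providers configured. Only non-LLM tools "
            "(clink, version, etc.) will be available."
        )
```

The key design point is that the error path is unchanged when any LLM tool is active, so existing configurations keep failing fast with the same message.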

Test plan

  • Verified server starts with DISABLED_TOOLS=analyze,refactor,testgen,secaudit,docgen,tracer,chat,thinkdeep,planner,consensus,codereview,precommit,debug and placeholder API keys
  • Server logs `WARNING - No API providers configured. Only non-LLM tools (clink, version, etc.) will be available.`
  • Server reaches `Server ready - waiting for tool requests...`
  • Existing behavior unchanged when LLM tools are enabled with valid keys
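The startup scenario above can be reproduced with something like the following. The tool list is taken verbatim from the test plan; the `GEMINI_API_KEY` variable name and the `python server.py` entry point are assumptions about this repo's setup:

```shell
# Disable every LLM-requiring tool; only clink, version, listmodels,
# challenge, and apilookup remain active.
export DISABLED_TOOLS="analyze,refactor,testgen,secaudit,docgen,tracer,chat,thinkdeep,planner,consensus,codereview,precommit,debug"

# A placeholder (or no key at all) is enough once this fix is in.
export GEMINI_API_KEY="placeholder"

# Then start the server and watch the logs for the warning followed by
# the ready message:
# python server.py
```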

Tools like `clink`, `version`, `listmodels`, `challenge`, and `apilookup`
don't require any LLM provider. When all LLM-requiring tools are disabled
via DISABLED_TOOLS, the server should start without requiring API keys.

Previously, the server would crash with "At least one API configuration
is required" even when only clink was needed. This change makes the
provider and auto-mode validation conditional on whether any LLM tools
are actually active.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the server's startup flexibility by making API key requirements conditional. Previously, the server would crash if no API keys were configured, even when only non-LLM tools were intended for use. The changes ensure that the server can start and operate non-LLM tools without requiring any API provider setup, improving user experience for specific use cases and providing clearer feedback through warnings instead of errors.

Highlights

  • API Key Requirement: The server no longer requires API keys to start if only non-LLM tools are enabled via DISABLED_TOOLS.
  • Non-LLM Tool Functionality: Non-LLM tools such as clink, version, listmodels, challenge, and apilookup now operate independently of any provider configuration.
  • Auto-Mode Validation: Auto-mode validation is now skipped when no LLM-dependent tools are active, preventing unnecessary checks.
  • Error Handling: A warning log is now issued when no API providers are configured, replacing the previous ValueError crash.


Changelog
  • server.py
    • Modified the custom_provider_factory function to conditionally enforce API provider requirements only when LLM-dependent tools are active.
    • Added a warning log message in custom_provider_factory when no API providers are configured, allowing the server to start for non-LLM tools.
    • Updated the cleanup_providers function to make auto-mode model availability checks conditional on LLM tools being active.
    • Introduced an informational log message in cleanup_providers when auto-mode is skipped due to no active LLM tools.
Activity
  • Verified that the server starts successfully when all LLM-requiring tools are disabled and placeholder API keys are used.
  • Confirmed that the server logs the expected warning message when no API providers are configured.
  • Ensured the server reaches the "Server ready" state under these conditions.
  • Identified a remaining task to verify that existing behavior remains unchanged when LLM tools are enabled with valid keys.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly modifies the server startup logic to allow running without any LLM API keys when only non-LLM tools are enabled. The changes conditionally enforce provider configuration and auto-mode validation based on whether any LLM-requiring tools are active. The implementation is sound. I have one suggestion to improve code organization by moving a constant to the module level.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a032e61547


vedant-hawcx and others added 2 commits March 5, 2026 11:32
…hardcoded set

Replace the hardcoded NON_LLM_TOOLS allowlist with a dynamic check using
each tool's requires_model() method. This correctly classifies tools like
planner, docgen, tracer, and consensus as non-LLM tools, since they
declare requires_model=False.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
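The dynamic classification this commit describes can be sketched as follows. The tool class names here are illustrative stand-ins; in the real server each tool class declares its own `requires_model` behavior:

```python
# Sketch of classifying tools dynamically via requires_model() instead of
# maintaining a hardcoded NON_LLM_TOOLS allowlist. These classes are
# hypothetical examples, not the project's actual tool classes.
class BaseTool:
    requires_model_flag = True

    def requires_model(self) -> bool:
        return self.requires_model_flag


class PlannerTool(BaseTool):
    # Declares that it never calls an LLM provider, so a hardcoded
    # allowlist is unnecessary: the tool classifies itself.
    requires_model_flag = False


class ChatTool(BaseTool):
    pass  # default: needs a model


def active_llm_tools(tools: dict) -> list[str]:
    """Names of active tools that actually need an LLM provider."""
    return [name for name, tool in tools.items() if tool.requires_model()]
```

With this approach, a tool like `planner` that later changes its `requires_model` declaration is reclassified automatically, whereas a hardcoded set would silently drift out of date.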
