The implement operation is a core feature of the Perpetual tool designed to automate code implementation based on user-provided instructions. This operation leverages Large Language Models (LLMs) to analyze your project, understand the context, and generate or modify code according to your specifications.
The implement operation works by identifying and processing sections of your code marked with `###IMPLEMENT###` comments. These comments serve as indicators for where new code should be generated or existing code modified. The operation follows a multi-stage process to ensure accurate and contextually appropriate code implementation.
In addition to using `###IMPLEMENT###` comments as indicators for code generation, a special task mode is available. Task mode allows you to provide implementation instructions directly, via standard input or from a file using the `-i` flag, thereby bypassing the search for `###IMPLEMENT###` comments. This provides an alternative approach when you prefer to specify your requirements in a task description rather than annotate the code.
- Project Analysis: The operation begins by analyzing your project structure and content. It utilizes the project index and annotations generated by the `annotate` operation to understand the overall context of your codebase.
- Target File Identification: Files containing `###IMPLEMENT###` comments are identified as targets for code implementation. When using task mode, `###IMPLEMENT###` comments are ignored and target files are selected according to the task instructions and the project index.
- Context Gathering: The operation collects relevant information from the target files and related project files to provide comprehensive context to the LLM.
- Code Generation: Using the gathered context and the instructions provided in the `###IMPLEMENT###` comments or task instructions (if task mode is enabled), the LLM generates or modifies code for each target file. It may also modify related files or even create new files if necessary.
- Integration: The generated code is integrated into your project, replacing the `###IMPLEMENT###` comments and/or modifying other existing code as specified.
Throughout this process, the implement operation relies heavily on the project index and file annotations to make informed decisions about code implementation, ensuring that the generated code is consistent with your project's structure, coding style, and existing functionality.
To effectively use the implement operation, follow this typical workflow:
1. Project Setup
   - Create the basic structure of your project, including main files and directories.
   - Initialize your project for use with the Perpetual tool by using the `init` operation.
   - Create local `.env` configuration file(s) at `<project_root>/.perpetual/*.env` and/or global configuration file(s) at `~/.config/Perpetual/*.env` on Linux or `<User profile dir>\AppData\Roaming\Perpetual\*.env` on Windows. Settings from the local project configuration file take precedence over global configuration settings. Use the example `*.env` files from `<project_root>/.perpetual/*.env.example` as a reference.
2. Marking Implementation Points
   - In your source files, use `###IMPLEMENT###` comments to indicate where you want code to be generated or modified. Example:

     ```
     //###IMPLEMENT###
     //Create a function to check user input
     ```

   - As an alternative to using `###IMPLEMENT###` comments, you can use task mode to provide implementation instructions.
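For instance, in a Python project a file prepared for the implement operation might look like the sketch below; the file name, the marker instructions, and the placeholder function are all illustrative:

```python
# user_input.py -- illustrative file prepared for `Perpetual implement`

# ###IMPLEMENT###
# Create a function `check_user_input(name)` that returns True only when
# `name` is a non-empty string of at most 64 characters.

def main() -> None:
    # Placeholder behavior until Perpetual replaces the marker above
    # with generated code.
    print("check_user_input is not implemented yet")

if __name__ == "__main__":
    main()
```

Running the implement operation on such a project would replace the marked section with a generated implementation.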
3. Running the Implement Operation
   - Execute the implement operation using the command: `Perpetual implement [flags]`
   - By default, the operation will process all files with `###IMPLEMENT###` comments.
   - Task mode: enable task mode with `-t`. Instructions can be provided through standard input or by specifying a file with `-i` (which automatically enables task mode). This bypasses the usual search for implementation comments and automatically enables planning mode.
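As an illustration of the task-mode workflow, you could prepare instructions in a file and pass it with `-i`; the task text below is purely illustrative:

```shell
# Write illustrative task instructions to a markdown file.
cat > task.md <<'EOF'
Add input validation to the user registration handler.
Reject empty names and names longer than 64 characters.
EOF

# Then run the operation in task mode, e.g.:
#   Perpetual implement -i task.md        # -i implies task mode (-t)
#   cat task.md | Perpetual implement -t  # or pipe instructions via stdin
```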
4. Reviewing and Iterating
   - Review the generated code for accuracy and consistency.
   - If necessary, use the `stash` operation to revert changes: `Perpetual stash -r`
   - Modify your `###IMPLEMENT###` comments or task instructions to provide more specific guidance if needed.
   - Re-run the `implement` operation to generate new code based on the updated instructions.
5. Finalizing
   - Once satisfied with the generated code, commit the changes to your version control system.
   - Repeat from step 2 for further implementations.
- `###IMPLEMENT###`: Marks sections for code implementation. You can provide detailed instructions after this comment.
- `###NOUPLOAD###`: Place this comment at the top of files containing sensitive or unneeded information. Files with this comment will not be sent to the LLM for processing during the `implement` operation unless the `-f` flag is used. Note that the file will still be exposed to the LLM during the `annotate` operation (always) or the `doc` operation (if using the `-f` flag).

It is important to note that while the `###NOUPLOAD###` comment prevents the full file content from being sent to the LLM during the `implement` operation, it does not provide complete protection against data exposure. The file will still be processed during the `annotate` operation, which may use a local LLM for generating annotations. This annotation process is necessary to create the project index, which helps the LLM understand the project structure and write new code in context. While the annotation may leak some contextual information about the file, this can be mitigated with special summarization instructions (see the `annotate` operation documentation for more details). Be aware of these limitations and take appropriate precautions when dealing with sensitive information. To ensure a file is never processed by the LLM, use the `project.json` configuration (see below).
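For instance, a sensitive module in a Python project might be marked like this; the file contents and the environment variable name are illustrative:

```python
# ###NOUPLOAD###
# This module deals with deployment secrets and should not be uploaded
# to the LLM during the implement operation.

import os

def read_api_key() -> str:
    # Illustrative helper: the environment variable name is made up.
    return os.environ.get("MYAPP_API_KEY", "")
```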
Perpetual provides detailed logging of LLM interactions in the `<project_root>/.perpetual/.message_log.txt` file. This file contains an unformatted log of the actual messages exchanged between Perpetual and the LLM. It provides a complete record of the communication and can be useful if you need to understand the exact content of the messages sent to the LLM.
To run the implement operation, use the following command:

`Perpetual implement [flags]`

Supported flags:

- `-h`: Display help information about the `implement` operation.
- `-c <mode>`: Context saving mode; reduces LLM context use for large projects (valid values: `auto|off|medium|high`).
- `-df <file>`: Optional path to a project description file to add into the LLM context (valid values: file-path|disabled).
- `-f`: Disable the 'no-upload' file filter and upload such files for review and processing if requested.
- `-i <file>`: Read task instructions from a file, in plain text or markdown format. This flag automatically enables task mode (equivalent to using both the `-i` and `-t` flags).
- `-n`: No-annotate mode. Skip re-annotating changed files and use current annotations, if any.
- `-ni`: No-incremental mode. Disable incremental 'search-and-replace' mode when generating file changes.
- `-p`: Enable planning; needed for bigger modifications that may create new files, not needed for single-file modifications. Disabled by default to save tokens.
- `-pr`: Enable planning with additional reasoning. May produce improved results for complex or abstractly described tasks, but can also lead to flawed reasoning and worsen the final outcome. This flag includes the `-p` flag.
- `-s <n>`: Limit the number of files related to the task returned by local search (0 = disable local search and only use LLM-requested files). This flag invokes an embedding operation and performs a local similarity search for files related to the implementation context.
- `-sp <n>`: Set the number of passes for related-files selection at stage 1 (default: 1). Higher pass counts select more files, compensating for possible LLM errors when finding relevant files, but cost more tokens and context use.
- `-t`: Implement the task directly from instructions read from stdin (or from a file if the `-i` flag is specified). This flag automatically enables planning mode (equivalent to using both the `-t` and `-p` flags).
- `-u`: Do not exclude unit-test source files from processing.
- `-x <file>`: Path to a user-supplied regex filter file for filtering out certain files from processing. See more info about using the filter here.
- `-z`: When using the `-p` or `-pr` flags, do not enforce initial sources into the file lists produced by planning.
- `-v`: Enable debug logging for more detailed output.
- `-vv`: Enable debug and trace logging for maximum verbosity.
The implement operation is divided into four main stages.
Stage 1: Find Relevant Files
- Run `annotate` and `embed` to update project source code annotations and vector embeddings (unless the `-n` flag is used or all project files are selected for processing).
- Generate a project index containing filenames and their annotations.
- Create a request for files with `###IMPLEMENT###` comments or based on task instructions. Query the LLM to identify which project source code files are relevant; also perform a local vector search to find relevant files the LLM may miss.
- Return the list of files to review.

Stage 2: Gather Context and Plan
- Gathering Source Code: Collect source code from the files identified for review.
- Generating Reasonings: If planning with reasoning is enabled via `-pr`, the LLM produces a detailed work plan outlining the steps required.

Stage 3: Determine Files to Modify
- Query LLM for File Modification: Determine which files will be modified or created.
- Parse the LLM's Response: Extract a list of files to modify or create, ensuring alignment with the project structure.

Stage 4: Implement the Code
- Gather Source Code and Work Plan: Use the relevant files and work plan as guidance.
- Iterative Processing of Each File:
  - Query the LLM to produce the implemented code.
  - Handle partial responses and continue if token limits are reached.
  - Parse and store the generated code for each file.
- Integration: Integrate the generated code into the appropriate files, replacing `###IMPLEMENT###` comments and ensuring seamless incorporation.
When working with large projects, Perpetual employs several context saving measures to manage LLM context limits. These are mainly needed so that the annotation, task, and source-code analysis API calls at Stage 1 fit within the LLM model's context window.
The -c flag allows you to control context saving behavior:
- `auto` (default): Automatically enables context saving based on project file count thresholds.
- `off`: Disables all context saving measures.
- `medium`: Enables moderate context saving regardless of project size.
- `high`: Enables aggressive context saving regardless of project size.
Context saving is automatically triggered based on project file count:
- Medium Context Saving: Activated when the project exceeds 400 files (configurable via `medium_context_saving_file_count` in `project.json`).
- High Context Saving: Activated when the project exceeds 1200 files (configurable via `high_context_saving_file_count` in `project.json`).
- Shorter Annotations: When context saving is enabled, the `annotate` operation generates shorter, more concise file summaries to reduce context usage. Important: if you change context saving settings, you may need to manually regenerate annotations using `Perpetual annotate -f` to ensure consistency.
- Project File Pre-selection: Uses local similarity search with embeddings to pre-select the most relevant files for processing:
  - Medium Context Saving: Selects 60% of project files, with 25% randomized (configurable via `medium_context_saving_select_percent` and `medium_context_saving_random_percent`).
  - High Context Saving: Selects 30% of project files, with 20% randomized (configurable via `high_context_saving_select_percent` and `high_context_saving_random_percent`).
- Multi-pass File Selection: The `-sp` flag enables multiple passes of file selection at Stage 1, helping compensate for potential LLM errors in identifying relevant files. It works with or without context saving measures enabled, but costs more API calls and may lead to higher token usage.
- Embeddings Support: Context saving features require an embedding model to be configured in your `*.env` files for local similarity search functionality.
- Annotation Updates: Ensure your project annotations are current when using context saving, as they are used for file relevance calculations. Annotations are not rebuilt automatically when you start using context saving measures, so rebuild them with the `annotate` operation using the `-f` flag to shrink all current annotations (not only those for new or modified files).
For details, see the project.json configuration file description below.
The implement operation can be fine-tuned using environment variables in the .env file. These variables allow you to customize the behavior of the LLM used for code implementation. Key configuration options include:
- LLM Provider
  - `LLM_PROVIDER_OP_IMPLEMENT_STAGE1`, `LLM_PROVIDER_OP_IMPLEMENT_STAGE2`, `LLM_PROVIDER_OP_IMPLEMENT_STAGE3`, `LLM_PROVIDER_OP_IMPLEMENT_STAGE4`: Specify the LLM provider for each stage of the implement operation.
- Model Selection
  - `ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE1`, `ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE2`, `ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE3`, `ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE4`: Anthropic models for each stage.
  - Similar variables exist for OpenAI, Ollama, and Generic providers (e.g., `OPENAI_MODEL_OP_IMPLEMENT_STAGE1`, `OLLAMA_MODEL_OP_IMPLEMENT_STAGE1`, `GENERIC_MODEL_OP_IMPLEMENT_STAGE1`, etc.)
- Token Limits
  - `ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE1`, `ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE2`, `ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE3`, `ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE4`: Set maximum tokens for each stage.
  - Similar variables exist for OpenAI, Ollama, and Generic providers.
- JSON Structured Output Mode
  - JSON structured output mode is supported for Stages 1 and 3 with some LLM providers. This mode can be enabled to provide faster responses and slightly lower costs, and may sometimes produce better results when used with Ollama. To enable it for different providers, add to your `.env` file:

    ```
    ANTHROPIC_FORMAT_OP_IMPLEMENT_STAGE1="json"
    ANTHROPIC_FORMAT_OP_IMPLEMENT_STAGE3="json"
    OPENAI_FORMAT_OP_IMPLEMENT_STAGE1="json"
    OPENAI_FORMAT_OP_IMPLEMENT_STAGE3="json"
    OLLAMA_FORMAT_OP_IMPLEMENT_STAGE1="json"
    OLLAMA_FORMAT_OP_IMPLEMENT_STAGE3="json"
    ```

- Retry Settings
  - `ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE1`, `ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE2`, `ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE3`, `ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE4`: Specify retry attempts for each stage.
  - Similar variables exist for OpenAI, Ollama, and Generic providers.
- Temperature
  - `ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE1`, `ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE2`, `ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE3`, `ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE4`: Set the temperature for each stage.
  - Similar variables exist for OpenAI, Ollama, and Generic providers.
- Other LLM Parameters
  - `TOP_K`, `TOP_P`, `SEED`, `REPEAT_PENALTY`, `FREQ_PENALTY`, `PRESENCE_PENALTY`: Can be set for each stage by appending `_OP_IMPLEMENT_STAGE<NUMBER>`.
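For example, following this naming pattern, per-stage sampling parameters might be set like this. These exact variable names are an assumption derived from the pattern above; verify them against the `*.env.example` files before use:

```
ANTHROPIC_TOP_P_OP_IMPLEMENT_STAGE4="0.9"
OLLAMA_TOP_K_OP_IMPLEMENT_STAGE1="40"
OLLAMA_REPEAT_PENALTY_OP_IMPLEMENT_STAGE4="1.1"
```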
Example configuration in a `.env` file:

```
LLM_PROVIDER_OP_IMPLEMENT_STAGE1="anthropic"
LLM_PROVIDER_OP_IMPLEMENT_STAGE2="anthropic"
LLM_PROVIDER_OP_IMPLEMENT_STAGE3="anthropic"
LLM_PROVIDER_OP_IMPLEMENT_STAGE4="anthropic"
ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE1="claude-sonnet-4-20250514"
ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE2="claude-sonnet-4-20250514"
ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE3="claude-sonnet-4-20250514"
ANTHROPIC_MODEL_OP_IMPLEMENT_STAGE4="claude-sonnet-4-20250514"
ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE1="1024"
ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE2="4096"
ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE3="1024"
ANTHROPIC_MAX_TOKENS_OP_IMPLEMENT_STAGE4="32768"
ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE1="0.2"
ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE2="1"
ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE3="0.2"
ANTHROPIC_TEMPERATURE_OP_IMPLEMENT_STAGE4="0.5"
ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE1="2"
ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE2="7"
ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE3="7"
ANTHROPIC_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE4="10"
```

Customization of LLM prompts for the implement operation is handled through the `.perpetual/op_implement.json` configuration file. This file is populated by the `init` operation, which sets up default language-specific prompts tailored to your project's needs. You may modify it in case of problems, but normally you should not change it unless you are adapting prompts for a programming language or project type not supported by Perpetual.
The prompt configuration is organized by stages, with each stage having specific prompts for different scenarios. The configuration allows for fine-tuning how the LLM processes requests at each stage of the implementation workflow.
Stage 1 is responsible for analyzing the project context and identifying relevant files for code implementation. It creates a project index using file annotations and determines which additional files should be reviewed to provide proper context for the implementation task.
- `stage1_analysis_prompt`: Main prompt for regular mode that asks the LLM to identify which project files are relevant for implementing the specified tasks.
- `stage1_analysis_json_mode_prompt`: Alternative version of the analysis prompt designed for JSON structured output mode, providing the same functionality with formatted output.
- `stage1_task_analysis_prompt`: Analysis prompt specifically for task mode, when implementation instructions are provided directly rather than through `###IMPLEMENT###` comments.
- `stage1_task_analysis_json_mode_prompt`: JSON-formatted version of the task analysis prompt for structured output in task mode.
Stage 2 handles the gathering of source code context and, optionally, the generation of implementation reasoning or work plans. This stage prepares the foundation for subsequent stages by organizing relevant information and creating detailed plans when needed.
- `code_prompt`: Prompt for reviewing and analyzing the source code files identified in Stage 1, providing context for implementation decisions.
- `code_response`: Simulated response acknowledging the code review completion.
- `stage2_noplanning_prompt`: Prompt used when planning mode is disabled (`-p` and `-pr` flags not used), requesting direct implementation without detailed planning.
- `stage2_noplanning_response`: Simulated response for the no-planning mode.
- `stage2_reasonings_prompt`: Prompt for generating detailed reasoning and work plans when planning with reasoning mode is enabled (`-pr` flag).
- `stage2_reasonings_prompt_final`: Final prompt used after reasoning generation to prepare for subsequent stages. This is a simplified version of the previous prompt; it replaces the full prompt in the LLM message history on later stages to draw the LLM's attention away from instructions that are no longer needed.
- `stage2_task_reasonings_prompt`: Reasoning prompt specifically for task mode implementations.
- `stage2_task_reasonings_prompt_final`: Final reasoning prompt for task mode. Like `stage2_reasonings_prompt_final`, it is a simplified version of the previous prompt, used in the LLM message history on later stages instead of the full prompt.
Stage 3 determines which files will be modified or created during the implementation process. This stage analyzes the requirements and work plan to produce a comprehensive list of files that need changes.
- `stage3_planning_prompt`: Main prompt for determining file modifications when using planning mode with full file content analysis.
- `stage3_planning_json_mode_prompt`: JSON-structured version of the planning prompt for providers that support structured output.
- `stage3_planning_lite_prompt`: Simplified planning prompt used when reasoning has already been generated in Stage 2, requiring less detailed analysis.
- `stage3_planning_lite_json_mode_prompt`: JSON version of the simplified planning prompt.
- `stage3_task_planning_prompt`: Planning prompt specifically designed for task mode implementations.
- `stage3_task_planning_json_mode_prompt`: JSON-structured version of the task planning prompt.
- `stage3_task_extra_files_prompt`: Additional prompt used in task mode when extra files need to be included in the context to prevent overwriting existing code.
Stage 4 performs the actual code implementation, processing each target file individually and generating the final code based on all previous analysis and planning.
- `stage4_changes_done_prompt`: Prompt that provides context about previously implemented files during iterative processing.
- `stage4_changes_done_response`: Simulated response acknowledging the completed changes.
- `stage4_continue_prompt`: Prompt used when the LLM response reaches token limits and needs to continue generating the remaining code.
- `stage4_process_prompt`: Main prompt for implementing code in a specific file, with placeholders for file-specific information.
- `stage4_process_incremental_prompt`: Prompt for incremental search-and-replace mode, which can be more efficient for large files.
System-level configuration options that apply across all stages:
- `system_prompt`: The primary system prompt that establishes the LLM's role and general behavior for the implement operation.
- `system_prompt_ack`: Acknowledgment response that the LLM should provide to confirm understanding of the system prompt.
- `filename_embed_rx`: Regular expression pattern used to embed the filename into file implementation requests.
- `implement_comments_rx`: Regular expressions to detect `###IMPLEMENT###` comments.
- `stage1_output_key`, `stage1_output_schema`, `stage1_output_schema_name`, `stage1_output_schema_desc`: Parameters for JSON-structured output in Stage 1.
- `stage3_output_key`, `stage3_output_schema`, `stage3_output_schema_name`, `stage3_output_schema_desc`: Parameters for JSON-structured output in Stage 3.
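To illustrate how a marker-detection regex of the `implement_comments_rx` kind behaves, here is a small sketch. The pattern below is an assumption for demonstration only, not the one shipped in `op_implement.json`:

```python
import re

# Illustrative marker pattern: a line comment in several common comment
# syntaxes, followed by the ###IMPLEMENT### marker.
marker_rx = re.compile(r"^\s*(?://|#|--|;)\s*###IMPLEMENT###")

source = """\
package main

// ###IMPLEMENT###
// Create a function to check user input
func main() {}
"""

# Collect 1-based line numbers where the marker appears.
marked_lines = [n for n, line in enumerate(source.splitlines(), start=1)
                if marker_rx.search(line)]
print(marked_lines)  # -> [3]
```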
Global project configuration is handled through the `.perpetual/project.json` configuration file. It defines which source code files are targets for processing with Perpetual and which are not. Update the paths and regex patterns used for project file selection to fit your specific project requirements.
The following configuration keys control which files are included or excluded from processing:
- `project_files_whitelist`: An array of regular expressions that define which files should be included for processing. Only files matching these patterns will be considered by Perpetual operations. This allows you to focus on specific file types or directories.
- `project_files_blacklist`: An array of regular expressions that define which files should be excluded from processing. Files matching these patterns will be filtered out even if they match the whitelist patterns. Use this to exclude configuration files, build artifacts, or other files that shouldn't be processed.
- `project_test_files_blacklist`: An array of regular expressions specifically for identifying unit test files. These files will be excluded from processing by default unless the `-u` flag is used. This helps keep test files separate from main source code during analysis.
- `files_to_md_code_mappings`: A 2D array that maps file extensions to markdown code block language identifiers. This helps the LLM properly format code blocks when presenting source code. For example, `[".go", "go"]` maps Go files to the "go" language identifier in markdown.
- `noupload_comments_rx`: Regular expressions to detect `###NOUPLOAD###` comments in files.
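A minimal sketch of these keys for a hypothetical Go project follows; the patterns are illustrative examples, not defaults produced by the `init` operation:

```json
{
  "project_files_whitelist": ["(?i)^.*\\.go$"],
  "project_files_blacklist": ["(?i)^vendor[\\\\/].*"],
  "project_test_files_blacklist": ["(?i)^.*_test\\.go$"],
  "files_to_md_code_mappings": [[".go", "go"]]
}
```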
Context saving thresholds and percentages can be customized in your project.json configuration file:
- `medium_context_saving_file_count`: The threshold number of files that automatically triggers medium context saving mode when using the `auto` context saving setting.
- `high_context_saving_file_count`: The threshold number of files that automatically triggers high context saving mode when using the `auto` context saving setting.
- `medium_context_saving_select_percent`: Percentage of project files to select during medium context saving mode (default: 60%).
- `medium_context_saving_random_percent`: Percentage of selected files that should be chosen randomly during medium context saving mode (default: 25%).
- `high_context_saving_select_percent`: Percentage of project files to select during high context saving mode (default: 30%).
- `high_context_saving_random_percent`: Percentage of selected files that should be chosen randomly during high context saving mode (default: 20%).
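Putting the documented defaults together, the relevant `project.json` fragment might look like this. The fragment is shown only to illustrate the keys; the exact value format in a generated `project.json` may differ:

```json
{
  "medium_context_saving_file_count": 400,
  "high_context_saving_file_count": 1200,
  "medium_context_saving_select_percent": 60,
  "medium_context_saving_random_percent": 25,
  "high_context_saving_select_percent": 30,
  "high_context_saving_random_percent": 20
}
```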
Configuration for incremental search-and-replace mode in Stage 4:
- `files_incremental_mode_min_len`: A 2D array that maps file patterns to minimum file sizes for incremental mode. Files smaller than the specified size will use full-file processing instead of incremental mode.
- `files_incremental_mode_rx`: Regular expressions used to parse incremental search-and-replace blocks from LLM responses.
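Conceptually, incremental mode applies targeted search-and-replace edits instead of rewriting whole files. The sketch below illustrates the general idea only; it is not Perpetual's actual parser or block format:

```python
def apply_search_replace(text: str, search: str, replace: str) -> str:
    """Apply one search-and-replace edit; the search snippet must occur
    exactly once so the edit location is unambiguous."""
    count = text.count(search)
    if count != 1:
        raise ValueError(f"search block matched {count} times, expected 1")
    return text.replace(search, replace)

original = "func add(a, b int) int {\n\treturn a + b\n}\n"
patched = apply_search_replace(
    original,
    "\treturn a + b\n",
    "\t// An LLM-generated change would go here.\n\treturn a + b\n",
)
print(patched.count("\n"))  # the file has grown by one line
```

The "must match exactly once" check mirrors why such modes need well-formed search blocks: an ambiguous match would make the edit location undefined.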
- `project_index_prompt`: Initial prompt that presents the project file index with annotations to give the LLM an overview of the project structure and content.
- `project_index_response`: Simulated LLM response acknowledging receipt of the project index.
- `project_description_prompt`: Prompt for providing project description context to the LLM.
- `project_description_response`: Simulated response acknowledging the project description.
- `filename_tags`: Tags used to denote filenames in messages.
- `filename_tags_rx`: Regular expressions to parse filename tags.
- `code_tags_rx`: Regular expressions to identify code blocks in responses.
- Clear Instructions: Provide detailed and clear instructions in your `###IMPLEMENT###` comments or task instructions.
- Incremental Implementation: Break complex features into smaller tasks for easier review and iteration.
- Regular Code Reviews: Always review the generated code carefully.
- Version Control: Use version control systems and the `stash` operation to manage and revert changes.
- Consistent Coding Style: Maintain a consistent style to help the LLM match your existing code.
- Good Project Architecture: A clear, modular architecture yields better LLM results.
- Use Planning Flags: For extensive changes, enable `-p` or `-pr` for thorough planning.
- Use `###NOUPLOAD###` with Awareness: Prevent large or sensitive files from being uploaded, but configure `project.json` to fully exclude files if needed.
- Iterative Refinement: Refine comments or task instructions and re-run the operation as needed.
- Fine-tune LLM Settings: Experiment with environment settings for your LLM provider; consult the `*.env.example` files at `<project_root>/.perpetual/*.env.example`.
- LLM Query Failures: Retries up to the number specified in `<PROVIDER>_ON_FAIL_RETRIES_OP_IMPLEMENT_STAGE<NUMBER>`.
- Token Limit Handling: Continues generation if the LLM response reaches the token limit.
- Invalid Responses: Checks for properly formatted code blocks and retries if no valid code is found.
The implement operation can consume significant time (when using a locally running LLM) or incur costs when using commercial LLM providers, especially for large projects or complex tasks. Consider the following to optimize performance:
- Use Appropriate Models: Choose LLM models/providers that balance capability and speed, for example a smaller model for Stages 1 and 3 and more powerful models for Stages 2 and 4. You may also try small local models with Ollama for the `annotate` operation to save on the costs of auto re-annotating changed files.
- Do Not Use `-p` or `-pr` Flags Unless Needed: You can significantly save on LLM API calls, tokens, and costs by omitting these flags when you believe the implementation won't produce new files or change files not marked with `###IMPLEMENT###` comments.
- Incremental Implementation: For large projects, implement changes in smaller, manageable chunks rather than attempting to modify the entire codebase at once.
- Use the `-u` Flag: If your project contains unit-test source files relevant to the implementation task, use the `-u` flag to include them in processing (this disables the filter for such files). This provides additional context for the LLM and allows it to see and modify unit tests. Be aware that including tests increases the amount of code the LLM must analyze, which may increase costs.
- Custom File Filtering: For finer-grained control over which files are processed, use the `-x` flag with a custom regex filter file. This lets you exclude specific files or file types not relevant to your current implementation task, reducing costs.
- Incremental Mode: The `-ni` flag disables incremental search-and-replace mode, which can be useful for troubleshooting but may increase token usage for large files. Generally, incremental mode is more efficient and should be left enabled unless you encounter issues.