Blog

Weekly Update – March 30, 2026

Six releases shipped in github/gh-aw between March 24 and March 30 — that’s almost one a day. From expanded audit tooling to integrity-isolated cache storage and a wave of security fixes, this was a dense week. Here’s the rundown.

The freshest release ships with quality-of-life wins for workflow authors:

  • runs-on-slim for compile-stable jobs (#23490): Override the runner for compile-stable framework jobs with a new runs-on-slim key, giving you fine-grained control over which machine handles compilation.
  • Sibling nested imports fixed (#23475): ./file.md imports now resolve relative to the importing file’s directory, not the working directory. Modular workflows that import sibling files were silently broken before — now they’re not.
  • Custom tools in <safe-output-tools> prompt (#23487): Custom jobs, scripts, and actions are now listed in the agent’s <safe-output-tools> prompt block so the AI actually knows they exist.
  • Compile-time validation of safe-output job ordering (#23486): Misconfigured needs: ordering on custom safe-output jobs is now caught at compile time.
  • MCP Gateway v0.2.9 (#23513) and firewall v0.25.4 (#23514) bumped for all compiled workflows.
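As a sketch, the new runner override might sit in frontmatter like this — runs-on-slim is the key named in #23490, but its top-level placement and the runner label here are assumptions for illustration:

```yaml
# Hypothetical placement: runs-on-slim is from #23490;
# where it nests and the runner label are assumptions.
runs-on-slim: ubuntu-latest
```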

A security-heavy release with one major architectural upgrade:

Integrity-aware cache-memory is the headline. Cache storage now uses dedicated git branches — merged, approved, unapproved, and none — to enforce integrity isolation at the storage level. A run operating at unapproved integrity can no longer read data written by a merged-integrity run, and any change to your allow-only guard policy automatically invalidates stale cache entries. If you upgrade and see a cache miss on your first run, that’s intentional — legacy data has no integrity provenance and must be regenerated.

patch-format: bundle (#23338) is the other highlight: code-push flows now support git bundle as an alternative to git am, preserving merge commits, authorship, and per-commit messages that were previously dropped.
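Opting in might look roughly like this — patch-format: bundle is the documented key, while its placement under a code-push safe-output handler is our assumption:

```yaml
# Hypothetical nesting: only patch-format: bundle is confirmed
# by #23338; the surrounding handler name is an assumption.
safe-outputs:
  push-to-pull-request-branch:
    patch-format: bundle   # preserves merge commits, authorship, per-commit messages
```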

Security fixes:

  • Secret env var exclusion (#23360): AWF now strips all secret-bearing env vars (tokens, API keys, MCP secrets) from the agent container’s visible environment, closing a potential prompt-injection exfiltration path in pull_request_target workflows.
  • Argument injection fix (#23374): Package and image names in gh aw compile --validate-packages are validated before being passed to npm view, pip index versions, uv pip show, and docker.

The gh aw logs command gained cross-run report generation via the new --format flag: it aggregates firewall behavior across multiple workflow runs and produces an executive summary, domain inventory, and per-run breakdown:

```shell
gh aw logs agent-task --format markdown --count 10   # Markdown
gh aw logs --format markdown --json                  # JSON for dashboards
gh aw logs --format pretty                           # Console output
```

This release also includes a YAML env injection security fix (#23055): all env: emission sites in the compiler now use %q-escaped YAML scalars, preventing newlines or quote characters in frontmatter values from injecting sibling env variables into .lock.yml files.

gh aw audit diff (#22996) lets you compare two workflow runs side-by-side — firewall behavior, MCP tool invocations, token usage, and duration — to spot regressions and behavioral drift before they become incidents:

```shell
gh aw audit diff <run1> <run2> --format markdown
```

Five new sections also landed in the standard gh aw audit report: Engine Configuration, Prompt Analysis, Session & Agent Performance, Safe Output Summary, and MCP Server Health. One report now gives you the full picture.

Bot-actor concurrency isolation: Workflows combining safe-outputs.github-app with issue_comment-capable triggers now automatically get bot-isolated concurrency keys, preventing the workflow from cancelling itself mid-run when the bot posts a comment that re-triggers the same workflow.
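Conceptually, the generated concurrency group now folds the triggering actor into the key, so a bot-authored comment lands in its own group instead of cancelling the human-triggered run. The group expression below is an illustration of the idea, not the compiler's literal output:

```yaml
# Illustrative compiled output; the real key format may differ.
concurrency:
  group: "gh-aw-${{ github.workflow }}-${{ github.actor }}"
  cancel-in-progress: true
```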

A focused patch adds the skip-if-check-failing pre-activation gate — workflows can now bail out before the agent runs if a named CI check is currently failing, avoiding wasted inference on a broken codebase. It also ships an improved fuzzy schedule algorithm with weighted preferred windows and peak avoidance to reduce queue contention on shared runners.
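A hedged sketch of the new gate — skip-if-check-failing is the field named in the release, while its frontmatter position and the check name are assumptions:

```yaml
# Hypothetical frontmatter: field name from the release notes;
# placement and check name assumed for illustration.
skip-if-check-failing: "CI / build"
```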


The self-appointed gatekeeper of the issue tracker — reads every new issue and assigns labels so the right people see it.

This week, auto-triage-issues handled three runs. Two of them were textbook efficiency: triggered the moment a new issue landed, ran the pre-activation check, decided there was nothing worth labeling, and wrapped up in under 42 seconds. No fuss, no drama. Then came the Monday scheduled sweep. That run went a different direction: 18 turns, 817,000 tokens, and after all that contemplation… a failure. Somewhere between turn one and turn eighteen, the triage workflow decided this batch of issues deserved its most thoughtful analysis yet, burned through a frontier model’s patience, and still couldn’t quite close the loop.

It’s the classic overachiever problem — sometimes the issues that look the simplest turn out to be the ones that take all day.

Usage tip: If your auto-triage-issues scheduled runs are consistently expensive, the new agentic_fraction metric in gh aw audit can help you identify which turns are pure data-gathering and could be moved to deterministic shell steps.

View the workflow on GitHub


Update to v0.64.4 today with gh extension upgrade aw. The integrity-aware cache-memory migration will trigger a one-time cache miss on first run — expected and safe. As always, questions and contributions are welcome in github/gh-aw.

Weekly Update – March 23, 2026

Another week, another flurry of releases in github/gh-aw. Eight versions shipped between March 18 and March 21, pushing security hardening, extensibility, and performance improvements across the board. Here’s what you need to know.

The latest release leads with two important security fixes:

  • Supply chain protection: The Trivy vulnerability scanner action was removed after a supply chain compromise was discovered (#22007, #22065). Scanning has been replaced with a safer alternative.
  • Public repo integrity hardening (#21969): GitHub App authentication no longer exempts public repositories from the minimum-integrity guard policy, closing a gap where untrusted content could bypass integrity checks.

On the feature side:

  • Timezone support for on.schedule (#22018): Cron entries now accept an optional timezone field — finally, no more mental UTC arithmetic when you want your workflow to run “at 9 AM Pacific”.
  • Boolean expression optimizer (#22025): Condition trees are optimized at compile time, generating cleaner if: expressions in compiled workflows.
  • Wildcard target-repo in safe-output handlers (#21877): Use target-repo: "*" to write a single handler definition that works across any repository.
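For example, the “9 AM Pacific” schedule from the release notes could be written roughly like this — cron and timezone are the documented fields, while the exact nesting is our assumption:

```yaml
on:
  schedule:
    - cron: "0 9 * * *"
      timezone: "America/Los_Angeles"   # hypothetical value; field from #22018
```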

This one is a standout for extensibility and speed:

  • Custom Actions as Safe Output Tools (#21752): You can now expose any GitHub Action as an MCP tool via the new safe-outputs.actions block. The compiler resolves action.yml at compile time to derive the tool schema and inject it into the agent — no custom wiring needed. This opens the door to a whole ecosystem of reusable safe-output handlers built from standard Actions.
  • ~20 seconds faster per workflow run (#21873): A bump to DefaultFirewallVersion v0.24.5 eliminates a 10-second shutdown delay for both the agent container and the threat detection container. That’s 20 free seconds on every single run.
  • trustedBots support in MCP Gateway (#21865): Pass an allowlist of additional GitHub bot identities to the MCP Gateway, enabling safe cross-bot collaboration in guarded environments.
  • gh-aw-metadata v3 (#21899): Lock files now embed the configured agent ID/model in the gh-aw-metadata comment, making audits much easier.

Breaking change alert: lockdown: true is gone. It has been replaced by the more expressive min-integrity field. If you have lockdown: false in your frontmatter, remove it — it’s no longer recognized. The new integrity-level system gives you finer control over what content can trigger your workflows.
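The migration is mechanical: delete the old boolean and, if you want the previous lockdown behavior, state it as an integrity floor. The before/after below assumes min-integrity takes the levels named elsewhere in these notes (merged, approved, unapproved, none) — verify against the v0.62.2 release notes:

```yaml
# Before (no longer recognized):
# lockdown: true

# After (assumed equivalent; check the migration guide):
min-integrity: approved
```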

This release also introduces integrity filtering for log analysis — the gh aw logs command can now filter to only runs where DIFC integrity events were triggered, making security investigations much faster.

The GitHub MCP guard policy graduates to general availability. The policy automatically configures appropriate access controls on the GitHub MCP server at runtime — no manual lockdown configuration required. Also new: inline custom safe-output scripts, letting you define JavaScript handlers directly in your workflow frontmatter without a separate file.

Three patch releases covered:

  • Signed-commit support for protected branches (v0.61.1)
  • Broader ecosystem domain coverage for language package registries (v0.61.2)
  • Critical workflow_dispatch expression evaluation fix (v0.61.2)

Several important fixes also landed today (March 23).

Your tireless four-hourly guardian of PR quality — reads every open pull request and evaluates it against CONTRIBUTING.md for compliance and completeness.

contribution-check ran five times this week and processed a steady stream of incoming PRs, creating issues for contributors who needed guidance, adding labels, and leaving review comments. Four of five runs completed in under 5 minutes with 6–9 turns. The fifth run, however, apparently found the task of reviewing PRs during a particularly active Sunday evening so intellectually stimulating that it worked through 50 turns and consumed 1.55 million tokens — roughly 5× its usual appetite — before the safe_outputs step politely called it a night. It still managed to file issues, label PRs, and post comments on the way out. Overachiever.

One earlier run also hit a minor hiccup: the pre-agent filter step forgot to write its output file, leaving the agent with nothing to evaluate. Rather than fabricating a list of PRs to review, it dutifully reported “missing data” and moved on. Sometimes the bravest thing is knowing when there’s nothing to do.

Usage tip: The contribution-check pattern works best when your CONTRIBUTING.md is explicit and opinionated — the more specific your guidelines, the more actionable its feedback will be for contributors.

View the workflow on GitHub

Update to v0.62.5 to pick up the security fixes and timezone support. If you’ve been holding off on migrating from lockdown: true, now’s the time — check the v0.62.2 release notes for the migration path. As always, contributions and feedback are welcome in github/gh-aw.

Weekly Update – March 18, 2026

It’s been a busy week in github/gh-aw — seven releases shipped between March 13 and March 17, covering everything from a security model overhaul to a new label-based trigger and a long-overdue terminal resize fix. Let’s dig in.

The freshest release focuses on reliability and developer experience:

  • Automatic debug logging (#21406): Set ACTIONS_RUNNER_DEBUG=true on your runner and full debug logging activates automatically — no more manually adding DEBUG=* to every troubleshooting run.
  • Cross-repo project item updates (#21404): update_project now accepts a target_repo parameter, so org-level project boards can update fields on items from any repository.
  • GHE Cloud data residency support (#21408): Compiled workflows now auto-inject a GH_HOST step, fixing gh CLI failures on *.ghe.com instances.
  • CI build artifacts (#21440): The build CI job now uploads the compiled gh-aw binary as a downloadable artifact — handy for testing PRs without a local build.

This release rewires the security model. Breaking change: automatic lockdown=true is gone. Instead, the runtime now auto-configures guard policies on the GitHub MCP server — min_integrity=approved for public repos, min_integrity=none for private/internal. Remove any explicit lockdown: false from your frontmatter; it’s no longer needed.

Other highlights:

  • GHES domain auto-allowlisting (#21301): When engine.api-target points to a GHES instance, the compiler automatically adds GHES API hostnames to the firewall. No more silent blocks after every recompile.
  • github-app: auth in APM dependencies (#21286): APM dependencies: can now use github-app: auth for cross-org private package access.

A feature-packed release with two breaking changes (field renames in safe-outputs.allowed-domains) and several new capabilities:

  • Label Command Trigger (#21118): Activate a workflow by adding a label to an issue, PR, or discussion. The label is automatically removed so it can be reapplied to re-trigger.
  • gh aw domains command (#21086): Inspect the effective network domain configuration for all your workflows, with per-domain ecosystem annotations.
  • Pre-activation step injection — New on.steps and on.permissions frontmatter fields let you inject custom steps and permissions into the activation job for advanced scenarios.
  • v0.58.3 (March 15): MCP write-sink guard policy for non-GitHub MCP servers, Copilot pre-flight diagnostic for GHES, and a richer run details step summary.
  • v0.58.2 (March 14): GHES auto-detection in audit and add-wizard, excluded-files support for create-pull-request, and clearer run command errors.
  • v0.58.1 / v0.58.0 (March 13): call-workflow safe output for chaining workflows, checkout: false for agent jobs, custom OpenAI/Anthropic API endpoints, and 92 merged PRs in v0.58.0 alone.
  • Top-level github-app fallback (#21510): Define your GitHub App config once at the top level and let it propagate to safe-outputs, checkout, MCP, APM, and activation — instead of repeating it in every section.
  • GitHub App-only permission scopes (#21511): 31 new PermissionScope constants cover repository, org, and user-level GitHub App permissions (e.g., administration, members, environments).
  • Custom Huh theme (#21557): All 11 interactive CLI forms now use a Dracula-inspired theme consistent with the rest of the CLI’s visual identity.
  • Weekly blog post writer workflow (#21575): Yes, the workflow that wrote this post was itself merged this week. Meta!
  • CI job timeout limits (#21601): All 25 CI jobs that relied on GitHub’s 6-hour default now have explicit timeouts, preventing a stuck test from silently burning runner compute.

The first-ever Agent of the Week goes to the workflow that handles the unglamorous but essential job of keeping the issue tracker from becoming a swamp.

auto-triage-issues runs on a schedule and fires on every new issue, reading each one and deciding how to categorize it. This week it ran five times — three successful runs and two that were triggered by push events to a feature branch (which apparently fire the workflow but don’t give it much to work with). On its scheduled run this morning, it found zero open issues in the repository, so it created a tidy summary discussion to announce the clean state, as instructed. On an earlier issues-triggered run, it attempted to triage issue #21572 but hit empty results from GitHub MCP tools on all three read attempts — so it gracefully called missing_data and moved on rather than hallucinating a label.

Across its recent runs it made 131 search_repositories calls. We’re not sure why it finds repository searches so compelling, but clearly it’s very thorough about knowing its neighborhood before making any decisions.

Usage tip: Pair auto-triage-issues with a notify workflow on specific labels (e.g., security or needs-repro) so the right people get pinged automatically without anyone having to watch the inbox.

View the workflow on GitHub

Update to v0.61.0 to get all the improvements from this packed week. If you run workflows on GHES or in GHE Cloud, the new auto-detection and GH_HOST injection features are especially worth trying. As always, contributions and feedback are welcome in github/gh-aw.

Meet the Workflows: Project Coordination

Peli de Halleux

My dear friends, we’ve arrived at the grand finale - the most spectacular room of all in Peli’s Agent Factory!

We’ve journeyed through 18 categories of workflows - from triage bots to code quality improvers, from security guards to creative poets, culminating in advanced analytics that use machine learning to understand agent behavior patterns. Each workflow handles its individual task admirably.

But here’s the ultimate challenge: how do you coordinate multiple agents working toward a shared goal? How do you break down a large initiative like “migrate all workflows to a new engine” into trackable sub-tasks that different agents can tackle? How do you monitor progress, alert on delays, and ensure the whole is greater than the sum of its parts? This final post explores planning, task-decomposition, and project-coordination workflows - the orchestration layer that proves AI agents can handle not just individual tasks, but entire structured projects requiring careful coordination and progress tracking.

These agents coordinate multi-agent plans and projects:

  • Plan Command - Breaks down issues into actionable sub-tasks via /plan command - 514 merged PRs out of 761 proposed (67% merge rate)
  • Discussion Task Miner - Extracts actionable tasks from discussion threads - 60 merged PRs out of 105 proposed (57% merge rate)

Plan Command has contributed 514 merged PRs out of 761 proposed (67% merge rate), providing on-demand task decomposition that breaks complex issues into actionable sub-tasks. This is the highest-volume workflow by attribution in the entire factory. Developers can comment /plan on any issue to get an AI-generated breakdown into actionable sub-issues that agents can work on. A verified example causal chain: Discussion #7631 → Issue #8058 → PR #8110.

Discussion Task Miner has contributed 60 merged PRs out of 105 proposed (57% merge rate), continuously scanning discussions to extract actionable tasks that might otherwise be lost. The workflow demonstrates perfect causal chain attribution: when it creates an issue from a discussion, and Copilot Coding Assistant later fixes that issue, the resulting PR is correctly attributed to Discussion Task Miner. A verified example: Discussion #13934 → Issue #14084 → PR #14129. Recent merged examples include fixing firewall SSL-bump field extraction and adding security rationale to permissions documentation.

We learned that individual agents are great at focused tasks, but orchestrating multiple agents toward a shared goal requires careful architecture. Project coordination isn’t just about breaking down work - it’s about discovering work (Task Miner), planning work (Plan Command), and tracking work (Workflow Health Manager).

These workflows implement patterns like epic issues, progress tracking, and deadline management. They prove that AI agents can handle not just individual tasks, but entire projects when given proper coordination infrastructure.

You can add these workflows to your own repository and remix them. Get going with our Quick Start, then run one of the following:

Plan Command:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/plan.md
```

Discussion Task Miner:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/discussion-task-miner.md
```

Then edit and remix the workflow specifications to meet your needs, regenerate the lock file using gh aw compile, and push to your repository. See our Quick Start for further installation and setup instructions.

You can also create your own workflows.


Throughout this 19-part journey, we’ve explored workflows spanning from simple triage bots to sophisticated multi-phase improvers, from security guards to creative poets, from individual task automation to organization-wide orchestration.

The key insight? AI agents are most powerful when they’re specialized, well-coordinated, and designed for their specific context. No single agent does everything - instead, we have an ecosystem where each agent excels at its particular job, and they work together through careful orchestration.

We’ve learned that observability is essential, that incremental progress beats heroic efforts, that security needs careful boundaries, and that even “fun” workflows can drive meaningful engagement. We’ve discovered that AI agents can maintain documentation, manage campaigns, analyze their own behavior, and continuously improve codebases - when given the right architecture and guardrails.

As you build your own agentic workflows, remember: start small, measure everything, iterate based on real usage, and don’t be afraid to experiment. The workflows we’ve shown you evolved through experimentation and real-world use. Yours will too.

This is part 19 (final) of a 19-part series exploring the workflows in Peli’s Agent Factory.

Meet the Workflows: Advanced Analytics & ML

Peli de Halleux

Ooh! Time to plunge into the data wonderland at Peli’s Agent Factory! Where numbers dance and patterns sing!

In our previous post, we explored organization and cross-repo workflows that operate at enterprise scale - analyzing dozens of repositories together to find patterns and outliers that single-repo analysis would miss. We learned that perspective matters: what looks normal in isolation might signal drift at scale.

Beyond tracking basic metrics (run time, cost, success rate), we wanted deeper insights into how our agents actually behave and how developers interact with them. What patterns emerge from thousands of agent prompts? What makes some PR conversations more effective than others? How do usage patterns reveal improvement opportunities? This is where we brought out the big guns: machine learning, natural language processing, sentiment analysis, and clustering algorithms. Advanced analytics workflows don’t just count things - they understand them, finding patterns and insights that direct observation would never reveal.

These agents use sophisticated analysis techniques to extract insights:

Prompt Clustering Analysis has created 27 analysis discussions using ML to categorize thousands of agent prompts - for example, #6918 clustering agent prompts to identify patterns and optimization opportunities. It revealed patterns we never noticed (“oh, 40% of our prompts are about error handling”).

Copilot PR NLP Analysis applies natural language processing to PR conversations, performing sentiment analysis and identifying linguistic patterns across agent interactions. It found that PRs with questions in the title get faster review.

Copilot Session Insights has created 32 analysis discussions examining Copilot coding agent usage patterns and metrics across the workflow ecosystem. It identifies common patterns and failure modes.

Copilot Coding Agent Analysis has created 48 daily analysis discussions providing deep analysis of agent behavior patterns - for example, #6913 with the daily Copilot coding agent analysis.

What we learned: meta-analysis is powerful - using AI to analyze AI systems reveals insights that direct observation misses. These workflows helped us understand not just what our agents do, but how they behave and how users interact with them.

You can add these workflows to your own repository and remix them as follows:

Copilot Session Insights:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/copilot-agent-analysis.md
```

Copilot PR NLP Analysis:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/copilot-pr-nlp-analysis.md
```

Prompt Clustering Analysis:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/prompt-clustering-analysis.md
```

Copilot Coding Agent Analysis:

```shell
gh aw add-wizard https://github.com/github/gh-aw/blob/v0.45.5/.github/workflows/copilot-agent-analysis.md
```

Then edit and remix the workflow specifications to meet your needs, regenerate the lock file using gh aw compile, and push to your repository. See our Quick Start for further installation and setup instructions.

You can also create your own workflows.

We’ve reached the final stop: coordinating multiple agents toward shared, complex goals across extended timelines.

Continue reading: Project Coordination Workflows →


This is part 18 of a 19-part series exploring the workflows in Peli’s Agent Factory.