Compare commits


35 Commits

Author SHA1 Message Date
Boris Cherny
d742413949 feat: Add comment-on-duplicates script for safer duplicate handling
Replace `gh issue comment:*` permission with a constrained script that:
- Only accepts validated issue numbers
- Enforces max 3 duplicates
- Uses a fixed comment format
- Prevents arbitrary comment content injection
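For reference, a minimal invocation of the new script, matching the usage line in its header (issue numbers are illustrative):

```bash
# Comment on issue 123, linking up to three suspected duplicates (all numbers are placeholders)
./scripts/comment-on-duplicates.sh --base-issue 123 --potential-duplicates 456 789 101
```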

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 18:46:12 -08:00
Boris Cherny
817e8c1573 fix(security): Remove overly broad gh api permission from dedupe command
Remove `Bash(gh api:*)` from dedupe.md allowed-tools to prevent potential
secret exfiltration via prompt injection. The dedupe workflow only needs
gh issue view/list/comment and gh search commands - it doesn't require
raw API access.
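As a sketch, the tightened frontmatter in dedupe.md ends up looking like this (the exact lines appear in the diff further down):

```
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(./scripts/comment-on-duplicates.sh:*)
description: Find duplicate GitHub issues
---
```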

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 18:37:38 -08:00
Boris Cherny
128de2a75d feat: Add explanatory-output-style and learning-output-style plugins to marketplace
Added two missing plugins to the marketplace.json:
- explanatory-output-style: Adds educational insights about implementation choices
- learning-output-style: Interactive learning mode that requests code contributions

Both plugins are categorized under "learning" to help users discover educational tools.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 16:15:21 -07:00
Boris Cherny
b8a98a8df7 Merge pull request #10826 from anthropics/boris/rmbx
feat: Add learning-output-style plugin
2025-11-01 15:55:57 -07:00
claude[bot]
ba49573fe1 feat: Incorporate explanatory functionality into learning-output-style plugin
- Update session-start.sh to include explanatory insights alongside learning mode
- Add educational insight formatting with ★ Insight sections
- Update README.md to clarify differences from unshipped Learning output style
- Document that this plugin now combines both learning and explanatory functionality
- Address review feedback about incorporating explanatory-output-style features
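For context, the insight formatting referenced here (documented verbatim in the plugin READMEs further down) looks like:

```
`★ Insight ─────────────────────────────────────`
[2-3 key educational points]
`─────────────────────────────────────────────────`
```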

Co-authored-by: Boris Cherny <bcherny@users.noreply.github.com>
2025-11-01 22:37:17 +00:00
Boris Cherny
015808d89c feat: Add learning-output-style plugin
Add interactive learning mode plugin that requests meaningful code contributions at decision points. Based on the unshipped Learning output style, this plugin engages users in active learning by having them write 5-10 lines of code for business logic, error handling, and design decisions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 15:26:34 -07:00
GitHub Actions
ae411f8461 chore: Update CHANGELOG.md 2025-11-01 00:47:17 +00:00
GitHub Actions
4310085cb5 chore: Update CHANGELOG.md 2025-10-31 23:27:06 +00:00
GitHub Actions
c509821adc chore: Update CHANGELOG.md 2025-10-31 21:57:10 +00:00
GitHub Actions
d9aa4cf649 chore: Update CHANGELOG.md 2025-10-31 16:02:25 +00:00
GitHub Actions
b935da77db chore: Update CHANGELOG.md 2025-10-30 23:32:01 +00:00
Dickson Tsai
0c7d02b56f Merge pull request #10495 from anthropics/dickson/explanatory-output-style
Implement Explanatory output style as a plugin
2025-10-29 11:04:44 -07:00
Dickson Tsai
8b47e224a0 Editorial changes 2025-10-29 08:37:48 -07:00
Dickson Tsai
21bbc9f250 Merge pull request #10445 from stbenjam/lints
Add missing plugin.json files to fix claudelint errors
2025-10-29 08:19:27 -07:00
Catherine Wu
7add6863a0 Merge pull request #10076 from anthropics/add-oncall-triage-workflow
Add oncall triage slash command for issue management
2025-10-28 20:12:35 -07:00
Cat Wu
5484a86d28 Increase oncall triage engagement threshold to 50
Updates the oncall triage automation to require 50+ engagements
(comments + reactions) before applying the oncall label, making the
criteria more conservative to focus on the most critical issues.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-28 20:04:53 -07:00
Dickson Tsai
10e1d3fe77 Implement a plugin as alternative for deprecated Explanatory output style 2025-10-28 02:39:56 -07:00
GitHub Actions
4dc23d0275 chore: Update CHANGELOG.md 2025-10-28 00:45:49 +00:00
GitHub Actions
8077cdc68c chore: Update CHANGELOG.md 2025-10-27 21:29:34 +00:00
Stephen Benjamin
207b22de65 Add missing plugin.json files to fix claudelint errors
Added plugin metadata files for code-review and commit-commands plugins
to comply with Claude plugin structure requirements. These files were
identified as missing by the [claudelint](https://github.com/stbenjam/claudelint)
tool, which validates plugin structure and format according to the Claude
Code plugin conventions.
2025-10-27 12:49:45 -04:00
Ashwin Bhat
52fea66ba5 Update code-review.md (#10358) 2025-10-25 21:58:21 -07:00
Wanghong Yuan
4e417747c5 Merge pull request #10227 from anthropics/add-code-review-plugin
Add code-review plugin for automated PR reviews
2025-10-24 15:38:33 -07:00
GitHub Actions
1b41969c71 chore: Update CHANGELOG.md 2025-10-24 22:04:57 +00:00
Wanghong Yuan MacBook
e9af4d7c1d rm gh api 2025-10-24 13:08:13 -07:00
Wanghong Yuan MacBook
48a8bfc2b1 Add code-review plugin to marketplace.json
Added marketplace entry for the code-review plugin, which provides automated PR review using multiple specialized agents with confidence-based scoring.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-23 23:35:40 -07:00
Wanghong Yuan MacBook
546f0b46ac Add code-review plugin with automated PR review workflow
Add new code-review plugin that provides automated pull request reviews using multiple specialized agents with confidence-based scoring to filter false positives.

Key features:
- Multiple parallel agents for independent auditing (CLAUDE.md compliance, bug detection, historical context)
- Confidence-based scoring (0-100) with 80+ threshold to filter false positives
- Automatic skipping of closed, draft, or already-reviewed PRs
- Links directly to code with full SHA and line ranges
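For reference, those direct code links use GitHub's permalink format with the full commit SHA and an explicit line range (bracketed parts are placeholders):

```
https://github.com/owner/repo/blob/[full-sha]/path/file.ext#L[start]-L[end]
```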

Updates:
- Add code-review plugin directory with command and README
- Update plugins/README.md to document the new plugin

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-23 23:11:44 -07:00
GitHub Actions
3be7215354 chore: Update CHANGELOG.md 2025-10-21 21:34:34 +00:00
Cat Wu
5d0e5cf15f Add oncall triage slash command for issue management
Creates a new /oncall-triage command that automates the process of triaging GitHub issues and labeling critical ones for oncall attention.

The command:
- Fetches open bugs updated in last 3 days with 5+ engagements
- Systematically evaluates each issue for blocking severity
- Adds "oncall" label to truly blocking issues
- Provides summary of all issues that received the label

Includes guidance to use individual gh commands instead of bash loops to avoid approval prompts.
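For illustration, the individual commands the triage run is expected to use look like this; the issue number is a placeholder, and the exact invocations appear in the command file below:

```bash
# Inspect one issue in full before deciding (1234 is a placeholder issue number)
gh issue view 1234 --repo anthropics/claude-code --json title,body,labels,comments

# Label a truly blocking issue
gh issue edit 1234 --repo anthropics/claude-code --add-label "oncall"
```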

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-21 13:44:18 -07:00
Catherine Wu
71bb75e3b5 Merge pull request #10023 from anthropics/add-oncall-triage-workflow
Add automated oncall triage workflow
2025-10-21 09:49:49 -07:00
Catherine Wu
1b827ad951 Merge pull request #10024 from anthropics/claude/investigate-workflow-config-011CUKnhpbage9BLno96Adg1
feat: upgrade GitHub workflows to use Claude Sonnet 4.5
2025-10-21 09:49:40 -07:00
Cat Wu
113ea425ac Add automated oncall triage workflow
Implements a GitHub Actions workflow that automatically identifies and labels critical blocking issues requiring oncall attention.

Features:
- Runs every 6 hours via cron schedule
- Fetches open issues updated in the last 3 days (5 per page to avoid context overflow)
- Orders by UPDATED_AT DESC and stops when hitting issues older than 3 days
- Evaluates each issue for bug status, engagement level (5+), and blocking severity
- Uses LLM comprehension to determine true blocking impact, not keyword matching
- Applies "oncall" label to qualifying issues via GitHub MCP tools
- Provides detailed summary including processed count, labeled issues, and close calls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-20 23:13:36 -07:00
Claude
70cb0d1016 feat: upgrade GitHub workflows to use Claude Sonnet 4.5
Update all Claude Code GitHub Action workflows to use the latest Sonnet 4.5 model (claude-sonnet-4-5-20250929) instead of the default Sonnet 4.0 model. This provides improved performance and capabilities for:
- Issue commenting and PR reviews (claude.yml)
- Automated issue triage (claude-issue-triage.yml)
- Duplicate issue detection (claude-dedupe-issues.yml)
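Concretely, the change amounts to pinning the model through the action's `claude_args` input; a minimal fragment as it appears in the diffs below:

```yaml
# Added to each anthropics/claude-code-action / claude-code-base-action step
claude_args: "--model claude-sonnet-4-5-20250929"
```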

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-21 05:30:35 +00:00
GitHub Actions
ff0aafa946 chore: Update CHANGELOG.md 2025-10-20 22:56:31 +00:00
GitHub Actions
fb9d9169a0 chore: Update CHANGELOG.md 2025-10-18 00:27:28 +00:00
Ashwin Bhat
cabd74fa9e docs: Add comprehensive plugin documentation (#9797)
- Add plugins section to main README with link to detailed docs
- Create plugins/README.md with overview of all available plugins
- Add detailed READMEs for agent-sdk-dev, commit-commands, and feature-dev plugins
- Document all commands, agents, usage patterns, and workflows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 16:44:48 -07:00
23 changed files with 955 additions and 22 deletions

View File

@@ -57,6 +57,39 @@
},
"source": "./plugins/security-guidance",
"category": "security"
},
{
"name": "code-review",
"description": "Automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives",
"version": "1.0.0",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
},
"source": "./plugins/code-review",
"category": "productivity"
},
{
"name": "explanatory-output-style",
"description": "Adds educational insights about implementation choices and codebase patterns (mimics the deprecated Explanatory output style)",
"version": "1.0.0",
"author": {
"name": "Dickson Tsai",
"email": "dickson@anthropic.com"
},
"source": "./plugins/explanatory-output-style",
"category": "learning"
},
{
"name": "learning-output-style",
"description": "Interactive learning mode that requests meaningful code contributions at decision points (mimics the unshipped Learning output style)",
"version": "1.0.0",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
},
"source": "./plugins/learning-output-style",
"category": "learning"
}
]
}

View File

@@ -1,5 +1,5 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(./scripts/comment-on-duplicates.sh:*)
description: Find duplicate GitHub issues
---
@@ -11,28 +11,13 @@ To do this, follow these steps precisely:
2. Use an agent to view a Github issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search Github for duplicates of this issue, using diverse keywords and search approaches, using the summary from #1
4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives, that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)
5. Finally, use the comment script to post duplicates:
```
./scripts/comment-on-duplicates.sh --base-issue <issue-number> --potential-duplicates <dup1> <dup2> <dup3>
```
Notes (be sure to tell this to your agents, too):
- Use `gh` to interact with Github, rather than web fetch
- Do not use other tools, beyond `gh` (eg. don't use other MCP servers, file edit, etc.)
- Do not use other tools, beyond `gh` and the comment script (eg. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the following format precisely (assuming for this example that you found 3 suspected duplicates):
---
Found 3 possible duplicate issues:
1. <link to issue>
2. <link to issue>
3. <link to issue>
This issue will be automatically closed as a duplicate in 3 days.
- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment
🤖 Generated with [Claude Code](https://claude.ai/code)
---

View File

@@ -0,0 +1,40 @@
---
allowed-tools: Bash(gh issue list:*), Bash(gh issue view:*), Bash(gh issue edit:*), TodoWrite
description: Triage GitHub issues and label critical ones for oncall
---
You're an oncall triage assistant for GitHub issues. Your task is to identify critical issues that require immediate oncall attention and apply the "oncall" label.
Repository: anthropics/claude-code
Task overview:
1. First, get all open bugs updated in the last 3 days with at least 50 engagements:
```bash
gh issue list --repo anthropics/claude-code --state open --label bug --limit 1000 --json number,title,updatedAt,comments,reactions | jq -r '.[] | select((.updatedAt >= (now - 259200 | strftime("%Y-%m-%dT%H:%M:%SZ"))) and ((.comments | length) + ([.reactions[].content] | length) >= 50)) | "\(.number)"'
```
2. Save the list of issue numbers and create a TODO list with ALL of them. This ensures you process every single one.
3. For each issue in your TODO list:
- Use `gh issue view <number> --repo anthropics/claude-code --json title,body,labels,comments` to get full details
- Read and understand the full issue content and comments to determine actual user impact
- Evaluate: Is this truly blocking users from using Claude Code?
- Consider: "crash", "stuck", "frozen", "hang", "unresponsive", "cannot use", "blocked", "broken"
- Does it prevent core functionality? Can users work around it?
- Be conservative - only flag issues that truly prevent users from getting work done
4. For issues that are truly blocking and don't already have the "oncall" label:
- Use `gh issue edit <number> --repo anthropics/claude-code --add-label "oncall"`
- Mark the issue as complete in your TODO list
5. After processing all issues, provide a summary:
- List each issue number that received the "oncall" label
- Include the issue title and brief reason why it qualified
- If no issues qualified, state that clearly
Important:
- Process ALL issues in your TODO list systematically
- Don't post any comments to issues
- Only add the "oncall" label, never remove it
- Use individual `gh issue view` commands instead of bash for loops to avoid approval prompts

View File

@@ -27,6 +27,7 @@ jobs:
with:
prompt: "/dedupe ${{ github.repository }}/issues/${{ github.event.issue.number || inputs.issue_number }}"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--model claude-sonnet-4-5-20250929"
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -102,5 +102,6 @@ jobs:
timeout_minutes: "5"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
mcp_config: /tmp/mcp-config/mcp-servers.json
claude_args: "--model claude-sonnet-4-5-20250929"
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -34,4 +34,5 @@ jobs:
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--model claude-sonnet-4-5-20250929"

.github/workflows/oncall-triage.yml (new file, 120 additions)
View File

@@ -0,0 +1,120 @@
name: Oncall Issue Triage
description: Automatically identify and label critical blocking issues requiring oncall attention
on:
push:
branches:
- add-oncall-triage-workflow # Temporary: for testing only
schedule:
# Run every 6 hours
- cron: '0 */6 * * *'
workflow_dispatch: # Allow manual trigger
jobs:
oncall-triage:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Create oncall triage prompt
run: |
mkdir -p /tmp/claude-prompts
cat > /tmp/claude-prompts/oncall-triage-prompt.txt << 'EOF'
You're an oncall triage assistant for GitHub issues. Your task is to identify critical issues that require immediate oncall attention.
Important: Don't post any comments or messages to the issues. Your only action should be to apply the "oncall" label to qualifying issues.
Repository: ${{ github.repository }}
Task overview:
1. Fetch all open issues updated in the last 3 days:
- Use mcp__github__list_issues with:
- state="open"
- first=5 (fetch only 5 issues per page)
- orderBy="UPDATED_AT"
- direction="DESC"
- This will give you the most recently updated issues first
- For each page of results, check the updatedAt timestamp of each issue
- Add issues updated within the last 3 days (72 hours) to your TODO list as you go
- Keep paginating using the 'after' parameter until you encounter issues older than 3 days
- Once you hit issues older than 3 days, you can stop fetching (no need to fetch all open issues)
2. Build your TODO list incrementally as you fetch:
- As you fetch each page, immediately add qualifying issues to your TODO list
- One TODO item per issue number (e.g., "Evaluate issue #123")
- This allows you to start processing while still fetching more pages
3. For each issue in your TODO list:
- Use mcp__github__get_issue to read the issue details (title, body, labels)
- Use mcp__github__get_issue_comments to read all comments
- Evaluate whether this issue needs the oncall label:
a) Is it a bug? (has "bug" label or describes bug behavior)
b) Does it have at least 50 engagements? (count comments + reactions)
c) Is it truly blocking? Read and understand the full content to determine:
- Does this prevent core functionality from working?
- Can users work around it?
- Consider severity indicators: "crash", "stuck", "frozen", "hang", "unresponsive", "cannot use", "blocked", "broken"
- Be conservative - only flag issues that truly prevent users from getting work done
4. For issues that meet all criteria and do not already have the "oncall" label:
- Use mcp__github__update_issue to add the "oncall" label
- Do not post any comments
- Do not remove any existing labels
- Do not remove the "oncall" label from issues that already have it
Important guidelines:
- Use the TODO list to track your progress through ALL candidate issues
- Process issues efficiently - don't read every single issue upfront, work through your TODO list systematically
- Be conservative in your assessment - only flag truly critical blocking issues
- Do not post any comments to issues
- Your only action should be to add the "oncall" label using mcp__github__update_issue
- Mark each issue as complete in your TODO list as you process it
5. After processing all issues in your TODO list, provide a summary of your actions:
- Total number of issues processed (candidate issues evaluated)
- Number of issues that received the "oncall" label
- For each issue that got the label: list issue number, title, and brief reason why it qualified
- Close calls: List any issues that almost qualified but didn't quite meet the criteria (e.g., borderline blocking, had workarounds)
- If no issues qualified, state that clearly
- Format the summary clearly for easy reading
EOF
- name: Setup GitHub MCP Server
run: |
mkdir -p /tmp/mcp-config
cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
{
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server:sha-7aced2b"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
}
}
}
}
EOF
- name: Run Claude Code for Oncall Triage
uses: anthropics/claude-code-base-action@beta
with:
prompt_file: /tmp/claude-prompts/oncall-triage-prompt.txt
allowed_tools: "mcp__github__list_issues,mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue"
timeout_minutes: "10"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
mcp_config: /tmp/mcp-config/mcp-servers.json
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore (new file, 2 additions)
View File

@@ -0,0 +1,2 @@
.DS_Store

View File

@@ -1,12 +1,73 @@
# Changelog
## 2.0.31
- Windows: native installation uses shift+tab as shortcut for mode switching, instead of alt+m
- Vertex: add support for Web Search on supported models
- VSCode: Adding the respectGitIgnore configuration to include .gitignored files in file searches (defaults to true)
- Fixed a bug with subagents and MCP servers related to "Tool names must be unique" error
- Fixed issue causing `/compact` to fail with `prompt_too_long` by making it respect existing compact boundaries
- Fixed plugin uninstall not removing plugins
## 2.0.30
- Added helpful hint to run `security unlock-keychain` when encountering API key errors on macOS with locked keychain
- Added `allowUnsandboxedCommands` sandbox setting to disable the dangerouslyDisableSandbox escape hatch at policy level
- Added `disallowedTools` field to custom agent definitions for explicit tool blocking
- Added prompt-based stop hooks
- VSCode: Added respectGitIgnore configuration to include .gitignored files in file searches (defaults to true)
- Enabled SSE MCP servers on native build
- Deprecated output styles. Review options in `/output-style` and use --system-prompt-file, --system-prompt, --append-system-prompt, CLAUDE.md, or plugins instead
- Removed support for custom ripgrep configuration, resolving an issue where Search returns no results and config discovery fails
- Fixed Explore agent creating unwanted .md investigation files during codebase exploration
- Fixed a bug where `/context` would sometimes fail with "max_tokens must be greater than thinking.budget_tokens" error message
- Fixed `--mcp-config` flag to correctly override file-based MCP configurations
- Fixed bug that saved session permissions to local settings
- Fixed MCP tools not being available to sub-agents
- Fixed hooks and plugins not executing when using --dangerously-skip-permissions flag
- Fixed delay when navigating through typeahead suggestions with arrow keys
- VSCode: Restored selection indicator in input footer showing current file or code selection status
## 2.0.28
- Plan mode: introduced new Plan subagent
- Subagents: claude can now choose to resume subagents
- Subagents: claude can dynamically choose the model used by its subagents
- SDK: added --max-budget-usd flag
- Discovery of custom slash commands, subagents, and output styles no longer respects .gitignore
- Stop `/terminal-setup` from adding backslash to `Shift + Enter` in VS Code
- Add branch and tag support for git-based plugins and marketplaces using fragment syntax (e.g., `owner/repo#branch`)
- Fixed a bug where macOS permission prompts would show up upon initial launch when launching from home directory
- Various other bug fixes
## 2.0.27
- New UI for permission prompts
- Added current branch filtering and search to session resume screen for easier navigation
- Fixed directory @-mention causing "No assistant message found" error
- VSCode Extension: Add config setting to include .gitignored files in file searches
- VSCode Extension: Bug fixes for unrelated 'Warmup' conversations, and configuration/settings occasionally being reset to defaults
## 2.0.25
- Removed legacy SDK entrypoint. Please migrate to @anthropic-ai/claude-agent-sdk for future SDK updates: https://docs.claude.com/en/docs/claude-code/sdk/migration-guide
## 2.0.24
- Fixed a bug where project-level skills were not loading when --setting-sources 'project' was specified
- Claude Code Web: Support for Web -> CLI teleport
- Sandbox: Releasing a sandbox mode for the BashTool on Linux & Mac
- Bedrock: Display awsAuthRefresh output when auth is required
## 2.0.22
- Edit the plan in your text editor using CTRL+G
- Fixed content layout shift when scrolling through slash commands
- IDE: Add toggle to enable/disable thinking.
- Fix bug causing duplicate permission prompts with parallel tool calls
- Add support for enterprise managed MCP allowlist and denylist
## 2.0.21
- Support MCP `structuredContent` field in tool responses
- Added an interactive question tool
- Claude will now ask you questions more often in plan mode

View File

@@ -32,6 +32,16 @@ Simplifies common git operations with streamlined commands for committing, pushi
- `/clean_gone` - Clean up stale local branches marked as [gone]
- **Use case**: Faster git workflows with less context switching
### [code-review](./code-review/)
**Automated Pull Request Code Review Plugin**
Provides automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives.
- **Command**:
- `/code-review` - Automated PR review workflow
- **Use case**: Automated code review on pull requests with high-confidence issue detection (threshold ≥80)
### [feature-dev](./feature-dev/)
**Comprehensive Feature Development Workflow Plugin**

View File

@@ -0,0 +1,10 @@
{
"name": "code-review",
"description": "Automated code review for pull requests using multiple specialized agents with confidence-based scoring",
"version": "1.0.0",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
}
}

View File

@@ -0,0 +1,246 @@
# Code Review Plugin
Automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives.
## Overview
The Code Review Plugin automates pull request review by launching multiple agents in parallel to independently audit changes from different perspectives. It uses confidence scoring to filter out false positives, ensuring only high-quality, actionable feedback is posted.
## Commands
### `/code-review`
Performs automated code review on a pull request using multiple specialized agents.
**What it does:**
1. Checks if review is needed (skips closed, draft, trivial, or already-reviewed PRs)
2. Gathers relevant CLAUDE.md guideline files from the repository
3. Summarizes the pull request changes
4. Launches 4 parallel agents to independently review:
- **Agents #1 & #2**: Audit for CLAUDE.md compliance
- **Agent #3**: Scan for obvious bugs in changes
- **Agent #4**: Analyze git blame/history for context-based issues
5. Scores each issue 0-100 for confidence level
6. Filters out issues below 80 confidence threshold
7. Posts review comment with high-confidence issues only
**Usage:**
```bash
/code-review
```
**Example workflow:**
```bash
# On a PR branch, run:
/code-review
# Claude will:
# - Launch 4 review agents in parallel
# - Score each issue for confidence
# - Post comment with issues ≥80 confidence
# - Skip posting if no high-confidence issues found
```
**Features:**
- Multiple independent agents for comprehensive review
- Confidence-based scoring reduces false positives (threshold: 80)
- CLAUDE.md compliance checking with explicit guideline verification
- Bug detection focused on changes (not pre-existing issues)
- Historical context analysis via git blame
- Automatic skipping of closed, draft, or already-reviewed PRs
- Links directly to code with full SHA and line ranges
**Review comment format:**
```markdown
## Code review
Found 3 issues:
1. Missing error handling for OAuth callback (CLAUDE.md says "Always handle OAuth errors")
https://github.com/owner/repo/blob/abc123.../src/auth.ts#L67-L72
2. Memory leak: OAuth state not cleaned up (bug due to missing cleanup in finally block)
https://github.com/owner/repo/blob/abc123.../src/auth.ts#L88-L95
3. Inconsistent naming pattern (src/conventions/CLAUDE.md says "Use camelCase for functions")
https://github.com/owner/repo/blob/abc123.../src/utils.ts#L23-L28
```
**Confidence scoring:**
- **0**: Not confident, false positive
- **25**: Somewhat confident, might be real
- **50**: Moderately confident, real but minor
- **75**: Highly confident, real and important
- **100**: Absolutely certain, definitely real
**False positives filtered:**
- Pre-existing issues not introduced in PR
- Code that looks like a bug but isn't
- Pedantic nitpicks
- Issues linters will catch
- General quality issues (unless in CLAUDE.md)
- Issues with lint ignore comments
## Installation
This plugin is included in the Claude Code repository. The command is automatically available when using Claude Code.
## Best Practices
### Using `/code-review`
- Maintain clear CLAUDE.md files for better compliance checking
- Trust the 80+ confidence threshold - false positives are filtered
- Run on all non-trivial pull requests
- Review agent findings as a starting point for human review
- Update CLAUDE.md based on recurring review patterns
### When to use
- All pull requests with meaningful changes
- PRs touching critical code paths
- PRs from multiple contributors
- PRs where guideline compliance matters
### When not to use
- Closed or draft PRs (automatically skipped anyway)
- Trivial automated PRs (automatically skipped)
- Urgent hotfixes requiring immediate merge
- PRs already reviewed (automatically skipped)
## Workflow Integration
### Standard PR review workflow:
```bash
# Create PR with changes
/code-review
# Review the automated feedback
# Make any necessary fixes
# Merge when ready
```
### As part of CI/CD:
```bash
# Trigger on PR creation or update
# Automatically posts review comments
# Skip if review already exists
```
## Requirements
- Git repository with GitHub integration
- GitHub CLI (`gh`) installed and authenticated
- CLAUDE.md files (optional but recommended for guideline checking)
## Troubleshooting
### Review takes too long
**Issue**: Agents are slow on large PRs
**Solution**:
- Normal for large changes - agents run in parallel
- 4 independent agents ensure thoroughness
- Consider splitting large PRs into smaller ones
### Too many false positives
**Issue**: Review flags issues that aren't real
**Solution**:
- Default threshold is 80 (already filters most false positives)
- Make CLAUDE.md more specific about what matters
- Consider if the flagged issue is actually valid
### No review comment posted
**Issue**: `/code-review` runs but no comment appears
**Solution**:
Check if:
- PR is closed (reviews skipped)
- PR is draft (reviews skipped)
- PR is trivial/automated (reviews skipped)
- PR already has review (reviews skipped)
- No issues scored ≥80 (no comment needed)
### Link formatting broken
**Issue**: Code links don't render correctly in GitHub
**Solution**:
Links must follow this exact format:
```
https://github.com/owner/repo/blob/[full-sha]/path/file.ext#L[start]-L[end]
```
- Must use full SHA (not abbreviated)
- Must use `#L` notation
- Must include line range with at least 1 line of context
### GitHub CLI not working
**Issue**: `gh` commands fail
**Solution**:
- Install GitHub CLI: `brew install gh` (macOS) or see [GitHub CLI installation](https://cli.github.com/)
- Authenticate: `gh auth login`
- Verify repository has GitHub remote
## Tips
- **Write specific CLAUDE.md files**: Clear guidelines = better reviews
- **Include context in PRs**: Helps agents understand intent
- **Use confidence scores**: Issues ≥80 are usually correct
- **Iterate on guidelines**: Update CLAUDE.md based on patterns
- **Review automatically**: Set up as part of PR workflow
- **Trust the filtering**: Threshold prevents noise
## Configuration
### Adjusting confidence threshold
The default threshold is 80. To adjust, modify the command file at `commands/code-review.md`:
```markdown
Filter out any issues with a score less than 80.
```
Change `80` to your preferred threshold (0-100).
### Customizing review focus
Edit `commands/code-review.md` to add or modify agent tasks:
- Add security-focused agents
- Add performance analysis agents
- Add accessibility checking agents
- Add documentation quality checks
## Technical Details
### Agent architecture
- **2x CLAUDE.md compliance agents**: Redundancy for guideline checks
- **1x bug detector**: Focused on obvious bugs in changes only
- **1x history analyzer**: Context from git blame and history
- **Nx confidence scorers**: One per issue for independent scoring
### Scoring system
- Each issue independently scored 0-100
- Scoring considers evidence strength and verification
- Threshold (default 80) filters low-confidence issues
- For CLAUDE.md issues: verifies guideline explicitly mentions it
### GitHub integration
Uses `gh` CLI for:
- Viewing PR details and diffs
- Fetching repository data
- Reading git blame and history
- Posting review comments
## Author
Boris Cherny (boris@anthropic.com)
## Version
1.0.0

View File

@@ -0,0 +1,84 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr review:*), Bash(gh pr list:*)
description: Code review a pull request
---
Provide a code review for the given pull request.
To do this, follow these steps precisely:
1. Use an agent to check if the pull request (a) is closed, (b) is a draft, (c) does not need a code review (eg. because it is an automated pull request, or is very simple and obviously ok), or (d) already has a code review from you from earlier. If so, do not proceed.
2. Use another agent to give you a list of file paths to (but not the contents of) any relevant CLAUDE.md files from the codebase: the root CLAUDE.md file (if one exists), as well as any CLAUDE.md files in the directories whose files the pull request modified
3. Use an agent to view the pull request, and ask the agent to return a summary of the change
4. Then, launch 4 parallel agents to independently code review the change. The agents should do the following, then return a list of issues and the reason each issue was flagged (eg. CLAUDE.md adherence, bug, historical git context, etc.):
a. Agents #1 and #2: Independently audit the changes to make sure they comply with the CLAUDE.md
b. Agent #3: Read the file changes in the pull request, then do a shallow scan for obvious bugs. Avoid reading extra context beyond the changes, focusing just on the changes themselves. Focus on large bugs, and avoid small issues and nitpicks. Ignore likely false positives.
c. Agent #4: Read the git blame and history of the code modified, to identify any bugs in light of that historical context
5. For each issue found in #4, launch a parallel agent that takes the PR, issue description, and list of CLAUDE.md files (from step 2), and returns a score to indicate the agent's level of confidence for whether the issue is real or false positive. To do that, the agent should score each issue on a scale from 0-100, indicating its level of confidence. For issues that were flagged due to CLAUDE.md instructions, the agent should double check that the CLAUDE.md actually calls out that issue specifically. The scale is (give this rubric to the agent verbatim):
a. 0: Not confident at all. This is a false positive that doesn't stand up to light scrutiny, or is a pre-existing issue.
b. 25: Somewhat confident. This might be a real issue, but may also be a false positive. The agent wasn't able to verify that it's a real issue. If the issue is stylistic, it is one that was not explicitly called out in the relevant CLAUDE.md.
c. 50: Moderately confident. The agent was able to verify this is a real issue, but it might be a nitpick or not happen very often in practice. Relative to the rest of the PR, it's not very important.
d. 75: Highly confident. The agent double checked the issue, and verified that it is very likely it is a real issue that will be hit in practice. The existing approach in the PR is insufficient. The issue is very important and will directly impact the code's functionality, or it is an issue that is directly mentioned in the relevant CLAUDE.md.
e. 100: Absolutely certain. The agent double checked the issue, and confirmed that it is definitely a real issue, that will happen frequently in practice. The evidence directly confirms this.
6. Filter out any issues with a score less than 80. If there are no issues that meet this criteria, do not proceed.
7. Finally, comment back on the pull request with a list of issues you found. When writing your comment, keep in mind to:
a. Keep your output brief
b. Avoid emojis
c. Link and cite relevant code, files, and URLs
Examples of false positives, for steps 4 and 5:
- Pre-existing issues
- Something that looks like a bug but is not actually a bug
- Pedantic nitpicks that a senior engineer wouldn't call out
- Issues that a linter will catch (no need to run the linter to verify)
- General code quality issues (eg. lack of test coverage, general security issues), unless explicitly required in CLAUDE.md
- Issues that are called out in CLAUDE.md, but explicitly silenced in the code (eg. due to a lint ignore comment)
Notes:
- Use `gh` to interact with Github (eg. to fetch a pull request, or to create inline comments), rather than web fetch
- Make a todo list first
- You must cite and link each bug (eg. if referring to a CLAUDE.md, you must link it)
- For your comment, follow the following format precisely (assuming for this example that you found 3 issues):
---
## Code review
Found 3 issues:
1. <brief description of bug> (CLAUDE.md says "<...>")
<link to file and line with full sha1 + line range for context, eg. https://github.com/anthropics/claude-code/blob/1d54823877c4de72b2316a64032a54afc404e619/README.md#L13-L17>
2. <brief description of bug> (some/other/CLAUDE.md says "<...>")
<link to file and line with full sha1 + line range for context>
3. <brief description of bug> (bug due to <file and code snippet>)
<link to file and line with full sha1 + line range for context>
🤖 Generated with [Claude Code](https://claude.ai/code)
<sub>- If this code review was useful, please react with 👍. Otherwise, react with 👎.</sub>
---
- Or, if you found no issues:
---
## Auto code review
No issues found. Checked for bugs and CLAUDE.md compliance.
## 🤖 Generated with [Claude Code](https://claude.ai/code)
- When linking to code, follow the following format precisely, otherwise the Markdown preview won't render correctly: https://github.com/anthropics/claude-cli-internal/blob/c21d3c10bc8e898b7ac1a2d745bdc9bc4e423afe/package.json#L10-L15
- Requires full git sha
- Repo name must match the repo you're code reviewing
- # sign after the file name
- Line range format is L[start]-L[end]
- Provide at least 1 line of context before and after, centered on the line you are commenting about (eg. if you are commenting about lines 5-6, you should link to `L4-7`)

View File

@@ -0,0 +1,10 @@
{
"name": "commit-commands",
"description": "Streamline your git workflow with simple commands for committing, pushing, and creating pull requests",
"version": "1.0.0",
"author": {
"name": "Anthropic",
"email": "support@anthropic.com"
}
}

View File

@@ -0,0 +1,9 @@
{
"name": "explanatory-output-style",
"version": "1.0.0",
"description": "Adds educational insights about implementation choices and codebase patterns (mimics the deprecated Explanatory output style)",
"author": {
"name": "Dickson Tsai",
"email": "dickson@anthropic.com"
}
}

View File

@@ -0,0 +1,72 @@
# Explanatory Output Style Plugin
This plugin recreates the deprecated Explanatory output style as a SessionStart
hook.
WARNING: Do not install this plugin unless you are fine with incurring the token
cost of this plugin's additional instructions and output.
## What it does
When enabled, this plugin automatically adds instructions at the start of each
session that encourage Claude to:
1. Provide educational insights about implementation choices
2. Explain codebase patterns and decisions
3. Balance task completion with learning opportunities
## How it works
The plugin uses a SessionStart hook to inject additional context into every
session. This context instructs Claude to provide brief educational explanations
before and after writing code, formatted as:
```
`★ Insight ─────────────────────────────────────`
[2-3 key educational points]
`─────────────────────────────────────────────────`
```
## Usage
Once installed, the plugin activates automatically at the start of every
session. No additional configuration is needed.
The insights focus on:
- Specific implementation choices for your codebase
- Patterns and conventions in your code
- Trade-offs and design decisions
- Codebase-specific details rather than general programming concepts
## Migration from Output Styles
This plugin replaces the deprecated "Explanatory" output style setting. If you
previously used:
```json
{
"outputStyle": "Explanatory"
}
```
You can now achieve the same behavior by installing this plugin instead.
More generally, this SessionStart hook pattern is roughly equivalent to
CLAUDE.md, but it is more flexible and allows for distribution through plugins.
Note: Output styles that involve tasks besides software development are better
expressed as
[subagents](https://docs.claude.com/en/docs/claude-code/sub-agents), not as
SessionStart hooks. Subagents change the system prompt while SessionStart hooks
add to the default system prompt.
## Managing changes
- Disable the plugin - keep the code installed on your device
- Uninstall the plugin - remove the code from your device
- Update the plugin - create a local copy of this plugin to personalize it
- Hint: Ask Claude to read
https://docs.claude.com/en/docs/claude-code/plugins.md and set it up for
you!

View File

@@ -0,0 +1,15 @@
#!/bin/bash
# Output the explanatory mode instructions as additionalContext
# This mimics the deprecated Explanatory output style
cat << 'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "You are in 'explanatory' output style mode, where you should provide educational insights about the codebase as you help with the user's task.\n\nYou should be clear and educational, providing helpful explanations while remaining focused on the task. Balance educational content with task completion. When providing insights, you may exceed typical length constraints, but remain focused and relevant.\n\n## Insights\nIn order to encourage learning, before and after writing code, always provide brief educational explanations about implementation choices using (with backticks):\n\"`★ Insight ─────────────────────────────────────`\n[2-3 key educational points]\n`─────────────────────────────────────────────────`\"\n\nThese insights should be included in the conversation, not in the codebase. You should generally focus on interesting insights that are specific to the codebase or the code you just wrote, rather than general programming concepts. Do not wait until the end to provide insights. Provide them as you write code."
}
}
EOF
exit 0

View File

@@ -0,0 +1,15 @@
{
"description": "Explanatory mode hook that adds educational insights instructions",
"hooks": {
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks-handlers/session-start.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,9 @@
{
"name": "learning-output-style",
"version": "1.0.0",
"description": "Interactive learning mode that requests meaningful code contributions at decision points (mimics the unshipped Learning output style)",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
}
}

View File

@@ -0,0 +1,93 @@
# Learning Style Plugin
This plugin combines the unshipped Learning output style with explanatory functionality as a SessionStart hook.
**Note:** This plugin differs from the original unshipped Learning output style by also incorporating all functionality from the [explanatory-output-style plugin](https://github.com/anthropics/claude-code/tree/main/plugins/explanatory-output-style), providing both interactive learning and educational insights.
WARNING: Do not install this plugin unless you are fine with incurring the token cost of this plugin's additional instructions and the interactive nature of learning mode.
## What it does
When enabled, this plugin automatically adds instructions at the start of each session that encourage Claude to:
1. **Learning Mode:** Engage you in active learning by requesting meaningful code contributions at decision points
2. **Explanatory Mode:** Provide educational insights about implementation choices and codebase patterns
Instead of implementing everything automatically, Claude will:
1. Identify opportunities where you can write 5-10 lines of meaningful code
2. Focus on business logic and design choices where your input truly matters
3. Prepare the context and location for your contribution
4. Explain trade-offs and guide your implementation
5. Provide educational insights before and after writing code
## How it works
The plugin uses a SessionStart hook to inject additional context into every session. This context instructs Claude to adopt an interactive teaching approach where you actively participate in writing key parts of the code.
## When Claude requests contributions
Claude will ask you to write code for:
- Business logic with multiple valid approaches
- Error handling strategies
- Algorithm implementation choices
- Data structure decisions
- User experience decisions
- Design patterns and architecture choices
## When Claude won't request contributions
Claude will implement directly:
- Boilerplate or repetitive code
- Obvious implementations with no meaningful choices
- Configuration or setup code
- Simple CRUD operations
## Example interaction
**Claude:** I've set up the authentication middleware. The session timeout behavior is a security vs. UX trade-off - should sessions auto-extend on activity, or have a hard timeout?
In `auth/middleware.ts`, implement the `handleSessionTimeout()` function to define the timeout behavior.
Consider: auto-extending improves UX but may leave sessions open longer; hard timeouts are more secure but might frustrate active users.
**You:** [Write 5-10 lines implementing your preferred approach]
## Educational insights
In addition to interactive learning, Claude will provide educational insights about implementation choices using this format:
```
`★ Insight ─────────────────────────────────────`
[2-3 key educational points about the codebase or implementation]
`─────────────────────────────────────────────────`
```
These insights focus on:
- Specific implementation choices for your codebase
- Patterns and conventions in your code
- Trade-offs and design decisions
- Codebase-specific details rather than general programming concepts
## Usage
Once installed, the plugin activates automatically at the start of every session. No additional configuration is needed.
## Migration from Output Styles
This plugin combines the unshipped "Learning" output style with the deprecated "Explanatory" output style. It provides an interactive learning experience where you actively contribute code at meaningful decision points, while also receiving educational insights about implementation choices.
If you previously used the explanatory-output-style plugin, this learning plugin includes all of that functionality plus interactive learning features.
This SessionStart hook pattern is roughly equivalent to CLAUDE.md, but it is more flexible and allows for distribution through plugins.
## Managing changes
- Disable the plugin - keep the code installed on your device
- Uninstall the plugin - remove the code from your device
- Update the plugin - create a local copy of this plugin to personalize it
- Hint: Ask Claude to read https://docs.claude.com/en/docs/claude-code/plugins.md and set it up for you!
## Philosophy
Learning by doing is more effective than passive observation. This plugin transforms your interaction with Claude from "watch and learn" to "build and understand," ensuring you develop practical skills through hands-on coding of meaningful logic.

View File

@@ -0,0 +1,15 @@
#!/bin/bash
# Output the learning mode instructions as additionalContext
# This combines the unshipped Learning output style with explanatory functionality
cat << 'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "You are in 'learning' output style mode, which combines interactive learning with educational explanations. This mode differs from the original unshipped Learning output style by also incorporating explanatory functionality.\n\n## Learning Mode Philosophy\n\nInstead of implementing everything yourself, identify opportunities where the user can write 5-10 lines of meaningful code that shapes the solution. Focus on business logic, design choices, and implementation strategies where their input truly matters.\n\n## When to Request User Contributions\n\nRequest code contributions for:\n- Business logic with multiple valid approaches\n- Error handling strategies\n- Algorithm implementation choices\n- Data structure decisions\n- User experience decisions\n- Design patterns and architecture choices\n\n## How to Request Contributions\n\nBefore requesting code:\n1. Create the file with surrounding context\n2. Add function signature with clear parameters/return type\n3. Include comments explaining the purpose\n4. Mark the location with TODO or clear placeholder\n\nWhen requesting:\n- Explain what you've built and WHY this decision matters\n- Reference the exact file and prepared location\n- Describe trade-offs to consider, constraints, or approaches\n- Frame it as valuable input that shapes the feature, not busy work\n- Keep requests focused (5-10 lines of code)\n\n## Example Request Pattern\n\nContext: I've set up the authentication middleware. The session timeout behavior is a security vs. UX trade-off - should sessions auto-extend on activity, or have a hard timeout? This affects both security posture and user experience.\n\nRequest: In auth/middleware.ts, implement the handleSessionTimeout() function to define the timeout behavior.\n\nGuidance: Consider: auto-extending improves UX but may leave sessions open longer; hard timeouts are more secure but might frustrate active users.\n\n## Balance\n\nDon't request contributions for:\n- Boilerplate or repetitive code\n- Obvious implementations with no meaningful choices\n- Configuration or setup code\n- Simple CRUD operations\n\nDo request contributions when:\n- There are meaningful trade-offs to consider\n- The decision shapes the feature's behavior\n- Multiple valid approaches exist\n- The user's domain knowledge would improve the solution\n\n## Explanatory Mode\n\nAdditionally, provide educational insights about the codebase as you help with tasks. Be clear and educational, providing helpful explanations while remaining focused on the task. Balance educational content with task completion.\n\n### Insights\nBefore and after writing code, provide brief educational explanations about implementation choices using:\n\n\"`★ Insight ─────────────────────────────────────`\n[2-3 key educational points]\n`─────────────────────────────────────────────────`\"\n\nThese insights should be included in the conversation, not in the codebase. Focus on interesting insights specific to the codebase or the code you just wrote, rather than general programming concepts. Provide insights as you write code, not just at the end."
}
}
EOF
exit 0

View File

@@ -0,0 +1,15 @@
{
"description": "Learning mode hook that adds interactive learning instructions",
"hooks": {
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks-handlers/session-start.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,86 @@
#!/usr/bin/env bash
#
# Comments on a GitHub issue with a list of potential duplicates.
# Usage: ./comment-on-duplicates.sh --base-issue 123 --potential-duplicates 456 789 101
#
set -euo pipefail
REPO="anthropics/claude-code"
BASE_ISSUE=""
DUPLICATES=()
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--base-issue)
BASE_ISSUE="$2"
shift 2
;;
--potential-duplicates)
shift
while [[ $# -gt 0 && ! "$1" =~ ^-- ]]; do
DUPLICATES+=("$1")
shift
done
;;
*)
echo "Unknown option: $1" >&2
exit 1
;;
esac
done
# Validate base issue
if [[ -z "$BASE_ISSUE" ]]; then
echo "Error: --base-issue is required" >&2
exit 1
fi
if ! [[ "$BASE_ISSUE" =~ ^[0-9]+$ ]]; then
echo "Error: --base-issue must be a number, got: $BASE_ISSUE" >&2
exit 1
fi
# Validate duplicates
if [[ ${#DUPLICATES[@]} -eq 0 ]]; then
echo "Error: --potential-duplicates requires at least one issue number" >&2
exit 1
fi
if [[ ${#DUPLICATES[@]} -gt 3 ]]; then
echo "Error: --potential-duplicates accepts at most 3 issues" >&2
exit 1
fi
for dup in "${DUPLICATES[@]}"; do
if ! [[ "$dup" =~ ^[0-9]+$ ]]; then
echo "Error: duplicate issue must be a number, got: $dup" >&2
exit 1
fi
done
# Build comment body
COUNT=${#DUPLICATES[@]}
if [[ $COUNT -eq 1 ]]; then
HEADER="Found 1 possible duplicate issue:"
else
HEADER="Found $COUNT possible duplicate issues:"
fi
BODY="$HEADER"$'\n\n'
INDEX=1
for dup in "${DUPLICATES[@]}"; do
BODY+="$INDEX. https://github.com/$REPO/issues/$dup"$'\n'
((INDEX++))
done
BODY+=$'\n'"This issue will be automatically closed as a duplicate in 3 days."$'\n\n'
BODY+="- If your issue is a duplicate, please close it and 👍 the existing issue instead"$'\n'
BODY+="- To prevent auto-closure, add a comment or 👎 this comment"$'\n\n'
BODY+="🤖 Generated with [Claude Code](https://claude.ai/code)"
# Post the comment
gh issue comment "$BASE_ISSUE" --repo "$REPO" --body "$BODY"
echo "Posted duplicate comment on issue #$BASE_ISSUE"