Compare commits


24 Commits

Author SHA1 Message Date
Claude
f48a6223ce Add integration tests for multi-hook scenarios in hookify
Create a comprehensive pytest integration test suite for the hookify plugin:

- test_integration.py: Multi-hook evaluation, rule priority (blocking over warnings),
  condition AND logic, tool type field extraction, Stop/UserPromptSubmit events
- test_rule_loading.py: YAML frontmatter parsing, rule file loading, event filtering
- test_error_handling.py: Fault tolerance for missing files, invalid regex, malformed input

Also fix a bug discovered through testing: MultiEdit field extraction now
gracefully handles malformed edit entries (non-dict values in edits array).

68 tests covering:
- Multiple rules combining messages
- Blocking rules taking priority over warnings
- Multiple conditions with AND logic
- Different tool types (Bash, Write, Edit, MultiEdit)
- Stop event transcript checking
- UserPromptSubmit validation
- Tool matcher filtering
- Regex pattern matching and caching
- Error handling and edge cases

https://claude.ai/code/session_014B79JcfZHUaTfnThn3o3g2
2026-02-12 20:52:20 +00:00
Chris Lloyd
a93966285e Unify issue lifecycle labeling and sweep into a single system (#25352)
* Unify issue lifecycle labeling and sweep into a single system

Consolidate issue triage, stale detection, and lifecycle enforcement into
two components: a Claude-powered triage workflow and a unified sweep script.

Triage workflow changes:
- Add issue_comment trigger so Claude can re-evaluate lifecycle labels when
  someone responds to a needs-repro/needs-info issue
- Add concurrency group per issue with cancel-in-progress to avoid pile-up
- Filter out bot comments to prevent sweep/dedupe triggering re-triage
- Hardcode allowed label whitelist to prevent label sprawl (was discovering
  labels via gh label list, leading to junk variants like 'needs repro' vs
  'needs-repro')
- Replace MCP GitHub server with gh CLI — simpler, no Docker dependency,
  chaining is caught by the action so permissions are equivalent
- Add lifecycle labels (needs-repro, needs-info) for bugs missing info
- Add invalid label for off-topic issues (Claude API, billing, etc.)
- Add anti-patterns to prevent false positives (don't require specific
  format, model behavior issues don't need traditional repro, etc.)

Sweep script changes:
- Absorb stale issue detection (was separate stale-issue-manager workflow)
- Mark issues as stale after 14 days of inactivity
- Skip assigned issues (team is working on it internally)
- Skip enhancements with 10+ thumbs up (community wants it)
- Add invalid label with 3-day timeout
- Add autoclose label support to drain 200+ legacy issues
- Drop needs-votes (stale handles inactive enhancements)
- Unify close messages into a single template with per-label reasons
- Run 2x daily instead of once

Delete stale-issue-manager.yml — its logic is now in sweep.ts.
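For orientation, a rough sketch of what the absorbed stale pass in sweep.ts could look like (illustrative only; the actual script is not part of this diff, the endpoint shapes assume the public GitHub REST API, and the thresholds come from the bullets above):

// Illustrative sketch only; the real scripts/sweep.ts is not shown in this diff.
const owner = process.env.GITHUB_REPOSITORY_OWNER!;
const repo = process.env.GITHUB_REPOSITORY_NAME!;
const headers = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: "application/vnd.github+json",
};

const STALE_AFTER_DAYS = 14;           // mark as stale after 14 days of inactivity
const ENHANCEMENT_VOTE_EXEMPTION = 10; // skip enhancements with 10+ thumbs up

async function sweepStale(dryRun: boolean): Promise<void> {
  // Open issues, oldest-updated first, the same ordering the old workflow used.
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues?state=open&sort=updated&direction=asc&per_page=100`,
    { headers },
  );
  const issues = (await res.json()) as any[];
  const cutoff = Date.now() - STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;

  for (const issue of issues) {
    if (issue.pull_request) continue;      // the issues endpoint also returns PRs
    if (issue.assignees?.length) continue; // assigned: the team is on it internally
    const isEnhancement = issue.labels.some((l: any) => l.name === "enhancement");
    if (isEnhancement && (issue.reactions?.["+1"] ?? 0) >= ENHANCEMENT_VOTE_EXEMPTION) continue;
    if (new Date(issue.updated_at).getTime() > cutoff) continue;

    if (dryRun) {
      console.log(`[dry-run] would mark #${issue.number} as stale`);
      continue;
    }
    await fetch(`https://api.github.com/repos/${owner}/${repo}/issues/${issue.number}/labels`, {
      method: "POST",
      headers,
      body: JSON.stringify({ labels: ["stale"] }),
    });
  }
}

The --dry-run path only logs what it would do, which is what the test plan below exercises.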

## Test plan

Dry-run sweep locally:
  GITHUB_TOKEN=$(gh auth token) \
  GITHUB_REPOSITORY_OWNER=anthropics \
  GITHUB_REPOSITORY_NAME=claude-code \
  bun run scripts/sweep.ts --dry-run

Triage workflow will be tested by opening a test issue after merge.

* Update .github/workflows/claude-issue-triage.yml

Co-authored-by: Ashwin Bhat <ashwin@anthropic.com>

---------

Co-authored-by: Ashwin Bhat <ashwin@anthropic.com>
2026-02-12 11:45:31 -08:00
GitHub Actions
0931fb76da chore: Update CHANGELOG.md 2026-02-12 17:25:52 +00:00
Chris Lloyd
bac22cb316 Merge pull request #25210 from anthropics/chrislloyd/sweep-lifecycle-labels
Add daily sweep to enforce issue lifecycle label timeouts
2026-02-12 07:26:14 -08:00
GitHub Actions
77df0af778 chore: Update CHANGELOG.md 2026-02-12 09:24:03 +00:00
Chris Lloyd
a17040212c Add daily sweep to enforce issue lifecycle label timeouts
Introduce a simple, mechanical daily sweep that closes issues with
lifecycle labels past their timeout:

- needs-repro: 7 days
- needs-info: 7 days
- needs-votes: 30 days
- stale: 30 days

The sweep checks when the label was last applied via the events API,
and closes the issue if the timeout has elapsed. No AI, no comment
checking — if the label is still there past its timeout, close it.
Removing a label (by a triager, slash command, or future AI retriage)
is what prevents closure.

Each close message directs the reporter to open a new issue rather
than engaging with the closed one.
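
For illustration, the core check might look roughly like the sketch below. This is not the shipped scripts/sweep.ts; it only assumes the public GitHub issue-events and issues REST endpoints plus the timeouts listed above:

// Illustrative sketch of the per-label timeout check; not the actual scripts/sweep.ts.
const owner = process.env.GITHUB_REPOSITORY_OWNER!;
const repo = process.env.GITHUB_REPOSITORY_NAME!;
const headers = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: "application/vnd.github+json",
};

const TIMEOUT_DAYS: Record<string, number> = {
  "needs-repro": 7,
  "needs-info": 7,
  "needs-votes": 30,
  stale: 30,
};

// When was this label most recently applied? Uses the issue events API.
async function lastLabeledAt(issueNumber: number, label: string): Promise<Date | null> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/events?per_page=100`,
    { headers },
  );
  const events = (await res.json()) as any[];
  const labeled = events.filter((e) => e.event === "labeled" && e.label?.name === label);
  return labeled.length ? new Date(labeled[labeled.length - 1].created_at) : null;
}

async function closeIfExpired(issueNumber: number, label: string, dryRun: boolean): Promise<void> {
  const appliedAt = await lastLabeledAt(issueNumber, label);
  if (!appliedAt) return;
  const ageDays = (Date.now() - appliedAt.getTime()) / 86_400_000;
  if (ageDays < TIMEOUT_DAYS[label]) return; // still inside its window
  if (dryRun) {
    console.log(`[dry-run] would close #${issueNumber} (${label}, ${Math.floor(ageDays)}d)`);
    return;
  }
  // Close as not planned; the per-label close message is posted separately.
  await fetch(`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`, {
    method: "PATCH",
    headers,
    body: JSON.stringify({ state: "closed", state_reason: "not_planned" }),
  });
}

Because the clock keys off when the label was last applied rather than off general issue activity, removing the label prevents closure entirely, and re-applying it later restarts the timeout from the new applied-at date.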

The script supports --dry-run for local testing:
  GITHUB_TOKEN=$(gh auth token) \
  GITHUB_REPOSITORY_OWNER=anthropics \
  GITHUB_REPOSITORY_NAME=claude-code \
  bun run scripts/sweep.ts --dry-run

## Test plan

Ran --dry-run against anthropics/claude-code. Correctly identified 3
issues past their timeouts (1 needs-repro at 12d, 2 needs-info at
14d and 26d). No false positives.
2026-02-11 22:34:23 -08:00
GitHub Actions
76a2154fd5 chore: Update CHANGELOG.md 2026-02-12 05:59:28 +00:00
GitHub Actions
aca4801e91 chore: Update CHANGELOG.md 2026-02-12 01:56:04 +00:00
kashyap murali
f2a930799b Merge pull request #25102 from anthropics/fvolcic/code-review-comment-update
fix: Do not comment if --comment is not present
2026-02-11 23:26:35 +01:00
Franklin Volcic
6dcc7d8b76 ensure comments are not left if --comment is not present 2026-02-11 13:42:06 -08:00
GitHub Actions
0a0135f687 chore: Update CHANGELOG.md 2026-02-11 16:20:54 +00:00
GitHub Actions
e74abe58ab chore: Update CHANGELOG.md 2026-02-11 16:07:18 +00:00
GitHub Actions
7a7bed74e3 chore: Update CHANGELOG.md 2026-02-11 15:54:36 +00:00
GitHub Actions
9b64827a25 chore: Update CHANGELOG.md 2026-02-11 09:49:08 +00:00
GitHub Actions
54f0b535b3 chore: Update CHANGELOG.md 2026-02-11 09:47:22 +00:00
GitHub Actions
675baffdb3 chore: Update CHANGELOG.md 2026-02-11 08:54:27 +00:00
GitHub Actions
bae169824d chore: Update CHANGELOG.md 2026-02-11 08:39:40 +00:00
GitHub Actions
0b641a77ce chore: Update CHANGELOG.md 2026-02-11 08:21:20 +00:00
GitHub Actions
be5d08fe5f chore: Update CHANGELOG.md 2026-02-10 23:10:48 +00:00
GitHub Actions
19bb071fe0 chore: Update CHANGELOG.md 2026-02-10 00:52:42 +00:00
GitHub Actions
85f2807991 chore: Update CHANGELOG.md 2026-02-07 19:09:18 +00:00
GitHub Actions
e7f36bcdf0 chore: Update CHANGELOG.md 2026-02-07 18:01:26 +00:00
GitHub Actions
2bc62d1456 chore: Update CHANGELOG.md 2026-02-06 14:26:00 +00:00
GitHub Actions
ef1e0ac098 chore: Update CHANGELOG.md 2026-02-06 01:46:32 +00:00
12 changed files with 2111 additions and 226 deletions

View File

@@ -1,13 +1,20 @@
 name: Claude Issue Triage
 description: Automatically triage GitHub issues using Claude Code
 on:
   issues:
     types: [opened]
+  issue_comment:
+    types: [created]
 jobs:
   triage-issue:
     runs-on: ubuntu-latest
     timeout-minutes: 10
+    if: >-
+      github.event_name == 'issues' ||
+      (github.event_name == 'issue_comment' && !github.event.issue.pull_request && github.event.comment.user.type != 'Bot')
+    concurrency:
+      group: issue-triage-${{ github.event.issue.number }}
+      cancel-in-progress: true
     permissions:
       contents: read
       issues: write
@@ -17,30 +24,6 @@ jobs:
       - name: Checkout repository
         uses: actions/checkout@v4
-      - name: Setup GitHub MCP Server
-        run: |
-          mkdir -p /tmp/mcp-config
-          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
-          {
-            "mcpServers": {
-              "github": {
-                "command": "docker",
-                "args": [
-                  "run",
-                  "-i",
-                  "--rm",
-                  "-e",
-                  "GITHUB_PERSONAL_ACCESS_TOKEN",
-                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
-                ],
-                "env": {
-                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
-                }
-              }
-            }
-          }
-          EOF
       - name: Run Claude Code for Issue Triage
         timeout-minutes: 5
         uses: anthropics/claude-code-action@v1
@@ -50,56 +33,72 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_non_write_users: "*"
prompt: |
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
You're an issue triage assistant. Analyze the issue and manage labels.
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
IMPORTANT: Don't post any comments or messages to the issue. Your only actions are adding or removing labels.
Issue Information:
Context:
- REPO: ${{ github.repository }}
- ISSUE_NUMBER: ${{ github.event.issue.number }}
- EVENT: ${{ github.event_name }}
TASK OVERVIEW:
ALLOWED LABELS — you may ONLY use labels from this list. Never invent new labels.
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
Type: bug, enhancement, question, documentation, duplicate, invalid
Lifecycle: needs-repro, needs-info
Platform: platform:linux, platform:macos, platform:windows, platform:wsl, platform:ios, platform:android, platform:vscode, platform:intellij, platform:web, platform:aws-bedrock
API: api:bedrock, api:vertex
2. Next, use the GitHub tools to get context about the issue:
- You have access to these tools:
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
- Start by using mcp__github__get_issue to get the issue details
TOOLS:
- `gh issue view NUMBER`: Read the issue title, body, and labels
- `gh issue view NUMBER --comments`: Read the conversation
- `gh search issues QUERY`: Find similar or duplicate issues
- `gh issue edit NUMBER --add-label` / `--remove-label`: Add or remove labels
3. Analyze the issue content, considering:
- The issue title and description
- The type of issue (bug report, feature request, question, etc.)
- Technical areas mentioned
- Severity or priority indicators
- User impact
- Components affected
TASK:
4. Select appropriate labels from the available labels list provided above:
- Choose labels that accurately reflect the issue's nature
- Be specific but comprehensive
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
- Consider platform labels (android, ios) if applicable
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
1. Run `gh issue view ${{ github.event.issue.number }}` to read the issue details.
2. Run `gh issue view ${{ github.event.issue.number }} --comments` to read the conversation.
5. Apply the selected labels:
- Use mcp__github__update_issue to apply your selected labels
- DO NOT post any comments explaining your decision
- DO NOT communicate directly with users
- If no labels are clearly applicable, do not apply any labels
**If EVENT is "issues" (new issue):**
IMPORTANT GUIDELINES:
- Be thorough in your analysis
- Only select labels from the provided list above
3. First, check if this issue is actually about Claude Code (the CLI/IDE tool). Issues about the Claude API, claude.ai, the Claude app, Anthropic billing, or other Anthropic products should be labeled `invalid`. If invalid, apply only that label and stop.
4. Analyze and apply category labels:
- Type (bug, enhancement, question, etc.)
- Technical areas and platform
- Check for duplicates with `gh search issues`. Only mark as duplicate of OPEN issues.
5. Evaluate lifecycle labels:
- `needs-repro` (bugs only, 7 days): Bug reports without clear steps to reproduce. A good repro has specific, followable steps that someone else could use to see the same issue.
Do NOT apply if the user already provided error messages, logs, file paths, or a description of what they did. Don't require a specific format — narrative descriptions count.
For model behavior issues (e.g. "Claude does X when it should do Y"), don't require traditional repro steps — examples and patterns are sufficient.
- `needs-info` (bugs only, 7 days): The issue needs something from the community before it can progress — e.g. error messages, versions, environment details, or answers to follow-up questions. Don't apply to questions or enhancements.
Do NOT apply if the user already provided version, environment, and error details. If the issue just needs engineering investigation, that's not `needs-info`.
Issues with these labels are automatically closed after the timeout if there's no response.
The goal is to avoid issues lingering without a clear next step.
6. Apply all selected labels:
`gh issue edit ${{ github.event.issue.number }} --add-label "label1" --add-label "label2"`
**If EVENT is "issue_comment" (comment on existing issue):**
3. Evaluate lifecycle labels based on the full conversation:
- If the issue has `needs-repro` or `needs-info` and the missing information has now been provided, remove the label:
`gh issue edit ${{ github.event.issue.number }} --remove-label "needs-repro"`
- If the issue doesn't have lifecycle labels but clearly needs them (e.g., a maintainer asked for repro steps or more details), add the appropriate label.
- Comments like "+1", "me too", "same here", or emoji reactions are NOT the missing information. Only remove labels when substantive details are actually provided.
- Do NOT add or remove category labels (bug, enhancement, etc.) on comment events.
GUIDELINES:
- ONLY use labels from the ALLOWED LABELS list above — never create or guess label names
- DO NOT post any comments to the issue
- Your ONLY action should be to apply labels using mcp__github__update_issue
- Be conservative with lifecycle labels — only apply when clearly warranted
- Only apply lifecycle labels (`needs-repro`, `needs-info`) to bugs — never to questions or enhancements
- When in doubt, don't apply a lifecycle label — false positives are worse than missing labels
- It's okay to not add any labels if none are clearly applicable
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--model claude-sonnet-4-5-20250929
--mcp-config /tmp/mcp-config/mcp-servers.json
--allowedTools "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
--model claude-opus-4-6
--allowedTools "Bash(gh issue view:*),Bash(gh issue edit:*),Bash(gh search issues:*)"

View File

@@ -1,157 +0,0 @@
name: "Manage Stale Issues"
on:
schedule:
# 2am Pacific = 9am UTC (10am UTC during DST)
- cron: "0 10 * * *"
workflow_dispatch:
permissions:
issues: write
concurrency:
group: stale-issue-manager
jobs:
manage-stale-issues:
runs-on: ubuntu-latest
steps:
- name: Manage stale issues
uses: actions/github-script@v7
with:
script: |
const oneMonthAgo = new Date();
oneMonthAgo.setDate(oneMonthAgo.getDate() - 30);
const twoMonthsAgo = new Date();
twoMonthsAgo.setDate(twoMonthsAgo.getDate() - 60);
const warningComment = `This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes.`;
const closingComment = `This issue has been automatically closed due to 60 days of inactivity. If you're still experiencing this issue, please open a new issue with updated information.`;
let page = 1;
let hasMore = true;
let totalWarned = 0;
let totalClosed = 0;
let totalLabeled = 0;
while (hasMore) {
// Get open issues sorted by last updated (oldest first)
const { data: issues } = await github.rest.issues.listForRepo({
owner: context.repo.owner,
repo: context.repo.repo,
state: 'open',
sort: 'updated',
direction: 'asc',
per_page: 100,
page: page
});
if (issues.length === 0) {
hasMore = false;
break;
}
for (const issue of issues) {
// Skip if already locked
if (issue.locked) continue;
// Skip pull requests
if (issue.pull_request) continue;
// Check if updated more recently than 30 days ago
const updatedAt = new Date(issue.updated_at);
if (updatedAt > oneMonthAgo) {
// Since issues are sorted by updated_at ascending,
// once we hit a recent issue, all remaining will be recent too
hasMore = false;
break;
}
// Check if issue has autoclose label
const hasAutocloseLabel = issue.labels.some(label =>
typeof label === 'object' && label.name === 'autoclose'
);
try {
// Get comments to check for existing warning
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
per_page: 100
});
// Find the last comment from github-actions bot
const botComments = comments.filter(comment =>
comment.user && comment.user.login === 'github-actions[bot]' &&
comment.body && comment.body.includes('inactive for 30 days')
);
const lastBotComment = botComments[botComments.length - 1];
if (lastBotComment) {
// Check if the bot comment is older than 30 days (total 60 days of inactivity)
const botCommentDate = new Date(lastBotComment.created_at);
if (botCommentDate < oneMonthAgo) {
// Close the issue - it's been stale for 60+ days
console.log(`Closing issue #${issue.number} (stale for 60+ days): ${issue.title}`);
// Post closing comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
body: closingComment
});
// Close the issue
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});
totalClosed++;
}
// If bot comment exists but is recent, issue already has warning
} else if (updatedAt < oneMonthAgo) {
// No bot warning yet, issue is 30+ days old
console.log(`Warning issue #${issue.number} (stale for 30+ days): ${issue.title}`);
// Post warning comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
body: warningComment
});
totalWarned++;
// Add autoclose label if not present
if (!hasAutocloseLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
labels: ['autoclose']
});
totalLabeled++;
}
}
} catch (error) {
console.error(`Failed to process issue #${issue.number}: ${error.message}`);
}
}
page++;
}
console.log(`Summary:`);
console.log(`- Issues warned (30 days stale): ${totalWarned}`);
console.log(`- Issues labeled with autoclose: ${totalLabeled}`);
console.log(`- Issues closed (60 days stale): ${totalClosed}`);

.github/workflows/sweep.yml (new file, 31 lines added)
View File

@@ -0,0 +1,31 @@
name: "Issue Sweep"
on:
schedule:
- cron: "0 10,22 * * *"
workflow_dispatch:
permissions:
issues: write
concurrency:
group: daily-issue-sweep
jobs:
sweep:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Enforce lifecycle timeouts
run: bun run scripts/sweep.ts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}

View File

@@ -1,5 +1,67 @@
# Changelog
## 2.1.39
- Added guard against launching Claude Code inside another Claude Code session
- Fixed Agent Teams using wrong model identifier for Bedrock, Vertex, and Foundry customers
- Fixed a crash when MCP tools return image content during streaming
- Fixed /resume session previews showing raw XML tags instead of readable command names
- Improved model error messages for Bedrock/Vertex/Foundry users with fallback suggestions
- Fixed plugin browse showing misleading "Space to Toggle" hint for already-installed plugins
- Fixed hook blocking errors (exit code 2) not showing stderr to the user
- Added `speed` attribute to OTel events and trace spans for fast mode visibility
- Fixed /resume showing interrupt messages as session titles
- Fixed Opus 4.6 launch announcement showing for Bedrock/Vertex/Foundry users
- Improved error message for many-image dimension limit errors with /compact suggestion
- Fixed structured-outputs beta header being sent unconditionally on Vertex/Bedrock
- Improved terminal rendering performance
- Fixed fatal errors being swallowed instead of displayed
- Fixed process hanging after session close
- Fixed character loss at terminal screen boundary
- Fixed blank lines in verbose transcript view
## 2.1.38
- Fixed VS Code terminal scroll-to-top regression introduced in 2.1.37
- Fixed Tab key queueing slash commands instead of autocompleting
- Fixed bash permission matching for commands using environment variable wrappers
- Fixed text between tool uses disappearing when not using streaming
- Fixed duplicate sessions when resuming in VS Code extension
- Improved heredoc delimiter parsing to prevent command smuggling
- Blocked writes to `.claude/skills` directory in sandbox mode
## 2.1.37
- Fixed an issue where /fast was not immediately available after enabling /extra-usage
## 2.1.36
- Fast mode is now available for Opus 4.6. Learn more at https://code.claude.com/docs/en/fast-mode
## 2.1.34
- Fixed a crash when agent teams setting changed between renders
- Fixed a bug where commands excluded from sandboxing (via `sandbox.excludedCommands` or `dangerouslyDisableSandbox`) could bypass the Bash ask permission rule when `autoAllowBashIfSandboxed` was enabled
## 2.1.33
- Fixed agent teammate sessions in tmux to send and receive messages
- Fixed warnings about agent teams not being available on your current plan
- Added `TeammateIdle` and `TaskCompleted` hook events for multi-agent workflows
- Added support for restricting which sub-agents can be spawned via `Task(agent_type)` syntax in agent "tools" frontmatter
- Added `memory` frontmatter field support for agents, enabling persistent memory with `user`, `project`, or `local` scope
- Added plugin name to skill descriptions and `/skills` menu for better discoverability
- Fixed an issue where submitting a new message while the model was in extended thinking would interrupt the thinking phase
- Fixed an API error that could occur when aborting mid-stream, where whitespace text combined with a thinking block would bypass normalization and produce an invalid request
- Fixed API proxy compatibility issue where 404 errors on streaming endpoints no longer triggered non-streaming fallback
- Fixed an issue where proxy settings configured via `settings.json` environment variables were not applied to WebFetch and other HTTP requests on the Node.js build
- Fixed `/resume` session picker showing raw XML markup instead of clean titles for sessions started with slash commands
- Improved error messages for API connection failures — now shows specific cause (e.g., ECONNREFUSED, SSL errors) instead of generic "Connection error"
- Errors from invalid managed settings are now surfaced
- VSCode: Added support for remote sessions, allowing OAuth users to browse and resume sessions from claude.ai
- VSCode: Added git branch and message count to the session picker, with support for searching by branch name
- VSCode: Fixed scroll-to-bottom under-scrolling on initial session load and session switch
## 2.1.32
- Claude Opus 4.6 is now available!

View File

@@ -56,10 +56,15 @@ Note: Still review Claude generated PR's.
6. Filter out any issues that were not validated in step 5. This step will give us our list of high signal issues for our review.
7. If issues were found, skip to step 8 to post inline comments directly.
7. Output a summary of the review findings to the terminal:
- If issues were found, list each issue with a brief description.
- If no issues were found, state: "No issues found. Checked for bugs and CLAUDE.md compliance."
If NO issues were found, post a summary comment using `gh pr comment` (if `--comment` argument is provided):
"No issues found. Checked for bugs and CLAUDE.md compliance."
If `--comment` argument was NOT provided, stop here. Do not post any GitHub comments.
If `--comment` argument IS provided and NO issues were found, post a summary comment using `gh pr comment` and stop.
If `--comment` argument IS provided and issues were found, continue to step 8.
8. Create a list of all comments that you plan on leaving. This is only for you to make sure you are comfortable with the comments. Do not post this list anywhere.
@@ -85,7 +90,7 @@ Notes:
- Use gh CLI to interact with GitHub (e.g., fetch pull requests, create comments). Do not use web fetch.
- Create a todo list before starting.
- You must cite and link each issue in inline comments (e.g., if referring to a CLAUDE.md, include a link to it).
- If no issues are found, post a comment with the following format:
- If no issues are found and `--comment` argument is provided, post a comment with the following format:
---

View File

@@ -247,9 +247,13 @@ class RuleEngine:
         if field == 'file_path':
             return tool_input.get('file_path', '')
         elif field in ['new_text', 'content']:
-            # Concatenate all edits
+            # Concatenate all edits, handling malformed entries gracefully
             edits = tool_input.get('edits', [])
-            return ' '.join(e.get('new_string', '') for e in edits)
+            parts = []
+            for e in edits:
+                if isinstance(e, dict):
+                    parts.append(e.get('new_string', ''))
+            return ' '.join(parts)
         return None

View File

@@ -0,0 +1 @@
"""Hookify integration tests."""

View File

@@ -0,0 +1,208 @@
"""Pytest fixtures for hookify integration tests."""
import os
import sys
import json
import tempfile
import shutil
from pathlib import Path
from typing import Generator, Dict, Any, List
import pytest
# Add parent directories to path for imports
PLUGIN_ROOT = Path(__file__).parent.parent
PLUGINS_DIR = PLUGIN_ROOT.parent
sys.path.insert(0, str(PLUGINS_DIR))
sys.path.insert(0, str(PLUGIN_ROOT))
from hookify.core.config_loader import Rule, Condition, load_rules, extract_frontmatter
from hookify.core.rule_engine import RuleEngine
@pytest.fixture
def rule_engine() -> RuleEngine:
"""Create a RuleEngine instance."""
return RuleEngine()
@pytest.fixture
def temp_project_dir() -> Generator[Path, None, None]:
"""Create a temporary project directory with .claude folder.
This fixture creates a clean temp directory and changes to it,
then restores the original directory after the test.
"""
original_dir = os.getcwd()
temp_dir = tempfile.mkdtemp(prefix="hookify_test_")
# Create .claude directory for rule files
claude_dir = Path(temp_dir) / ".claude"
claude_dir.mkdir()
os.chdir(temp_dir)
yield Path(temp_dir)
os.chdir(original_dir)
shutil.rmtree(temp_dir)
@pytest.fixture
def sample_rule_file(temp_project_dir: Path) -> Path:
"""Create a sample rule file for testing."""
rule_content = """---
name: block-rm-rf
enabled: true
event: bash
action: block
conditions:
- field: command
operator: regex_match
pattern: rm\\s+-rf
---
**Dangerous command blocked!**
The `rm -rf` command can permanently delete files. Please use safer alternatives.
"""
rule_file = temp_project_dir / ".claude" / "hookify.dangerous-commands.local.md"
rule_file.write_text(rule_content)
return rule_file
@pytest.fixture
def create_rule_file(temp_project_dir: Path):
"""Factory fixture to create rule files with custom content."""
def _create(name: str, content: str) -> Path:
rule_file = temp_project_dir / ".claude" / f"hookify.{name}.local.md"
rule_file.write_text(content)
return rule_file
return _create
@pytest.fixture
def sample_bash_input() -> Dict[str, Any]:
"""Sample PreToolUse input for Bash tool."""
return {
"session_id": "test-session-123",
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {
"command": "ls -la"
},
"cwd": "/test/project"
}
@pytest.fixture
def sample_write_input() -> Dict[str, Any]:
"""Sample PreToolUse input for Write tool."""
return {
"session_id": "test-session-123",
"hook_event_name": "PreToolUse",
"tool_name": "Write",
"tool_input": {
"file_path": "/test/project/src/main.py",
"content": "print('hello world')"
},
"cwd": "/test/project"
}
@pytest.fixture
def sample_edit_input() -> Dict[str, Any]:
"""Sample PreToolUse input for Edit tool."""
return {
"session_id": "test-session-123",
"hook_event_name": "PreToolUse",
"tool_name": "Edit",
"tool_input": {
"file_path": "/test/project/src/main.py",
"old_string": "hello",
"new_string": "goodbye"
},
"cwd": "/test/project"
}
@pytest.fixture
def sample_multiedit_input() -> Dict[str, Any]:
"""Sample PreToolUse input for MultiEdit tool."""
return {
"session_id": "test-session-123",
"hook_event_name": "PreToolUse",
"tool_name": "MultiEdit",
"tool_input": {
"file_path": "/test/project/src/main.py",
"edits": [
{"old_string": "foo", "new_string": "bar"},
{"old_string": "baz", "new_string": "qux"}
]
},
"cwd": "/test/project"
}
@pytest.fixture
def sample_stop_input(temp_project_dir: Path) -> Dict[str, Any]:
"""Sample Stop event input with transcript file."""
# Create a transcript file
transcript_file = temp_project_dir / "transcript.txt"
transcript_file.write_text("""
User: Please implement the feature
Assistant: I'll implement that feature now.
[Uses Write tool to create file]
User: Great, now run the tests
Assistant: Running tests...
[Uses Bash tool: npm test]
All tests passed!
""")
return {
"session_id": "test-session-123",
"hook_event_name": "Stop",
"reason": "Task completed",
"transcript_path": str(transcript_file),
"cwd": str(temp_project_dir)
}
@pytest.fixture
def sample_userprompt_input() -> Dict[str, Any]:
"""Sample UserPromptSubmit event input."""
return {
"session_id": "test-session-123",
"hook_event_name": "UserPromptSubmit",
"user_prompt": "Please delete all files in the directory",
"cwd": "/test/project"
}
def make_rule(
name: str,
event: str,
conditions: List[Dict[str, str]],
action: str = "warn",
message: str = "Test message",
enabled: bool = True,
tool_matcher: str = None
) -> Rule:
"""Helper function to create Rule objects for testing."""
cond_objects = [
Condition(
field=c.get("field", ""),
operator=c.get("operator", "regex_match"),
pattern=c.get("pattern", "")
)
for c in conditions
]
return Rule(
name=name,
enabled=enabled,
event=event,
conditions=cond_objects,
action=action,
message=message,
tool_matcher=tool_matcher
)

View File

@@ -0,0 +1,497 @@
"""Tests for error handling and fault tolerance in hookify.
Tests cover:
- Graceful handling of missing files
- Invalid JSON/YAML handling
- Regex compilation errors
- Transcript file access errors
- Import failures
- Edge cases and boundary conditions
"""
import pytest
import os
from pathlib import Path
from typing import Dict, Any
from unittest.mock import patch, mock_open
from hookify.core.config_loader import load_rules, load_rule_file, extract_frontmatter
from hookify.core.rule_engine import RuleEngine, compile_regex
class TestTranscriptFileErrors:
"""Tests for handling transcript file access errors."""
def test_missing_transcript_file(self, rule_engine: RuleEngine, temp_project_dir):
"""Test handling when transcript file doesn't exist."""
stop_input = {
"hook_event_name": "Stop",
"reason": "Done",
"transcript_path": "/nonexistent/transcript.txt",
}
rules = [
_make_rule(
name="check-transcript",
event="stop",
conditions=[{"field": "transcript", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test message"
),
]
# Should not crash, transcript returns empty string
result = rule_engine.evaluate_rules(rules, stop_input)
# Rule shouldn't match since transcript is empty
assert result == {}
def test_unreadable_transcript_file(self, rule_engine: RuleEngine, temp_project_dir):
"""Test handling when transcript file is unreadable."""
# Create file and remove read permissions
transcript_file = temp_project_dir / "unreadable.txt"
transcript_file.write_text("content")
os.chmod(transcript_file, 0o000)
stop_input = {
"hook_event_name": "Stop",
"reason": "Done",
"transcript_path": str(transcript_file),
}
rules = [
_make_rule(
name="check-transcript",
event="stop",
conditions=[{"field": "transcript", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
try:
# Should not crash
result = rule_engine.evaluate_rules(rules, stop_input)
assert result == {} # No match since transcript couldn't be read
finally:
# Restore permissions for cleanup
os.chmod(transcript_file, 0o644)
class TestRegexErrors:
"""Tests for regex compilation and matching errors."""
def test_invalid_regex_pattern(self, rule_engine: RuleEngine):
"""Test handling of invalid regex patterns."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "ls -la"}
}
rules = [
_make_rule(
name="invalid-regex",
event="bash",
conditions=[{"field": "command", "operator": "regex_match", "pattern": "[unclosed"}],
action="block",
message="Should not match"
),
]
# Should not crash, invalid regex returns False (no match)
result = rule_engine.evaluate_rules(rules, input_data)
assert result == {}
def test_catastrophic_backtracking_regex(self, rule_engine: RuleEngine):
"""Test handling of potentially slow regex patterns."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "a" * 100}
}
# This pattern could cause catastrophic backtracking in some engines
# Python's re module handles this reasonably well
rules = [
_make_rule(
name="complex-regex",
event="bash",
conditions=[{"field": "command", "operator": "regex_match", "pattern": "(a+)+$"}],
action="warn",
message="Matched"
),
]
# Should complete without hanging
result = rule_engine.evaluate_rules(rules, input_data)
assert "Matched" in result.get("systemMessage", "")
def test_regex_cache(self):
"""Test that regex patterns are cached."""
pattern = r"test\s+pattern"
# Compile same pattern twice
regex1 = compile_regex(pattern)
regex2 = compile_regex(pattern)
# Should be the same object due to caching
assert regex1 is regex2
class TestMalformedInput:
"""Tests for handling malformed input data."""
def test_missing_tool_name(self, rule_engine: RuleEngine):
"""Test handling input without tool_name."""
input_data = {
"hook_event_name": "PreToolUse",
# Missing tool_name
"tool_input": {"command": "test"}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should not crash
result = rule_engine.evaluate_rules(rules, input_data)
# May or may not match depending on implementation
def test_missing_tool_input(self, rule_engine: RuleEngine):
"""Test handling input without tool_input."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
# Missing tool_input
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should not crash
result = rule_engine.evaluate_rules(rules, input_data)
assert result == {} # No match with missing input
def test_null_values_in_input(self, rule_engine: RuleEngine):
"""Test handling None values in tool_input."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {
"command": None
}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should not crash
result = rule_engine.evaluate_rules(rules, input_data)
def test_non_string_field_values(self, rule_engine: RuleEngine):
"""Test handling non-string values that get converted."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {
"command": 123 # Number instead of string
}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "123"}],
action="warn",
message="Found number"
),
]
result = rule_engine.evaluate_rules(rules, input_data)
# Should convert to string and match
assert "Found number" in result.get("systemMessage", "")
class TestRuleFileErrors:
"""Tests for rule file loading errors."""
def test_malformed_yaml(self, create_rule_file):
"""Test handling of malformed YAML in frontmatter."""
content = """---
name: test
enabled: [unclosed bracket
---
message
"""
rule_file = create_rule_file("malformed", content)
rule = load_rule_file(str(rule_file))
# Should handle gracefully (may return None or partial data)
# The custom YAML parser is lenient
def test_unicode_errors(self, temp_project_dir):
"""Test handling of files with invalid unicode."""
rule_file = temp_project_dir / ".claude" / "hookify.unicode.local.md"
# Write binary content that's not valid UTF-8
with open(rule_file, 'wb') as f:
f.write(b"---\nname: test\n---\n\xff\xfe invalid unicode")
rule = load_rule_file(str(rule_file))
assert rule is None # Should return None for encoding errors
def test_empty_file(self, create_rule_file):
"""Test handling of empty rule file."""
rule_file = create_rule_file("empty", "")
rule = load_rule_file(str(rule_file))
assert rule is None
class TestFieldExtractionErrors:
"""Tests for field extraction edge cases."""
def test_unknown_field_name(self, rule_engine: RuleEngine):
"""Test handling of unknown field names."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "test"}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "nonexistent_field", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should not crash, unknown field returns None -> no match
result = rule_engine.evaluate_rules(rules, input_data)
assert result == {}
def test_multiedit_with_empty_edits(self, rule_engine: RuleEngine):
"""Test MultiEdit tool with empty edits array."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "MultiEdit",
"tool_input": {
"file_path": "/test/file.py",
"edits": [] # Empty edits
}
}
rules = [
_make_rule(
name="test-rule",
event="file",
conditions=[{"field": "new_text", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should not crash
result = rule_engine.evaluate_rules(rules, input_data)
assert result == {}
def test_multiedit_with_malformed_edits(self, rule_engine: RuleEngine):
"""Test MultiEdit tool with malformed edit entries."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "MultiEdit",
"tool_input": {
"file_path": "/test/file.py",
"edits": [
{"invalid": "entry"}, # Missing new_string
None, # Null entry
"not a dict" # Wrong type
]
}
}
rules = [
_make_rule(
name="test-rule",
event="file",
conditions=[{"field": "new_text", "operator": "contains", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Should handle gracefully
result = rule_engine.evaluate_rules(rules, input_data)
class TestOperatorEdgeCases:
"""Tests for operator edge cases."""
def test_unknown_operator(self, rule_engine: RuleEngine):
"""Test handling of unknown operator."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "test"}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "unknown_op", "pattern": "test"}],
action="warn",
message="Test"
),
]
# Unknown operator returns False -> no match
result = rule_engine.evaluate_rules(rules, input_data)
assert result == {}
def test_empty_pattern(self, rule_engine: RuleEngine):
"""Test handling of empty pattern."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "test"}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": ""}],
action="warn",
message="Empty pattern"
),
]
result = rule_engine.evaluate_rules(rules, input_data)
# Empty string is contained in any string
assert "Empty pattern" in result.get("systemMessage", "")
def test_special_characters_in_pattern(self, rule_engine: RuleEngine):
"""Test patterns with special regex characters when using 'contains'."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "echo $HOME"}
}
rules = [
_make_rule(
name="test-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "$HOME"}],
action="warn",
message="Found $HOME"
),
]
result = rule_engine.evaluate_rules(rules, input_data)
# 'contains' does literal string matching, not regex
assert "Found $HOME" in result.get("systemMessage", "")
class TestConcurrentRuleEvaluation:
"""Tests for multiple rules with various states."""
def test_mixed_match_states(self, rule_engine: RuleEngine):
"""Test evaluation with mix of matching and non-matching rules."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "ls -la"}
}
rules = [
_make_rule(
name="match-ls",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="warn",
message="Found ls"
),
_make_rule(
name="no-match-rm",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "rm"}],
action="block",
message="Found rm"
),
_make_rule(
name="match-dash",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "-"}],
action="warn",
message="Found dash"
),
]
result = rule_engine.evaluate_rules(rules, input_data)
# Should have warnings from matching rules
assert "Found ls" in result.get("systemMessage", "")
assert "Found dash" in result.get("systemMessage", "")
# Should not have blocking (rm rule didn't match)
assert "hookSpecificOutput" not in result
def test_empty_rules_list(self, rule_engine: RuleEngine):
"""Test evaluation with empty rules list."""
input_data = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "ls"}
}
result = rule_engine.evaluate_rules([], input_data)
assert result == {}
# Helper function to create rules for tests
def _make_rule(name, event, conditions, action="warn", message="Test", enabled=True, tool_matcher=None):
"""Helper to create Rule objects."""
from hookify.core.config_loader import Rule, Condition
cond_objects = [
Condition(
field=c.get("field", ""),
operator=c.get("operator", "regex_match"),
pattern=c.get("pattern", "")
)
for c in conditions
]
return Rule(
name=name,
enabled=enabled,
event=event,
conditions=cond_objects,
action=action,
message=message,
tool_matcher=tool_matcher
)

View File

@@ -0,0 +1,662 @@
"""Integration tests for multi-hook scenarios in hookify.
Tests cover:
- Multiple hooks running against same input
- Hook priority (blocking rules over warnings)
- Cross-event state management
- Different tool types with varying field structures
- Error handling and fault tolerance
"""
import pytest
from typing import Dict, Any, List
from hookify.core.config_loader import Rule, Condition, load_rules
from hookify.core.rule_engine import RuleEngine
def make_rule(
name: str,
event: str,
conditions: List[Dict[str, str]],
action: str = "warn",
message: str = "Test message",
enabled: bool = True,
tool_matcher: str = None
) -> Rule:
"""Helper function to create Rule objects for testing."""
cond_objects = [
Condition(
field=c.get("field", ""),
operator=c.get("operator", "regex_match"),
pattern=c.get("pattern", "")
)
for c in conditions
]
return Rule(
name=name,
enabled=enabled,
event=event,
conditions=cond_objects,
action=action,
message=message,
tool_matcher=tool_matcher
)
class TestMultipleRulesEvaluation:
"""Tests for evaluating multiple rules against the same input."""
def test_multiple_warning_rules_combined(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Multiple warning rules should combine their messages."""
rules = [
make_rule(
name="warn-ls",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="warn",
message="ls command detected"
),
make_rule(
name="warn-la-flag",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "-la"}],
action="warn",
message="-la flag detected"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "systemMessage" in result
assert "warn-ls" in result["systemMessage"]
assert "warn-la-flag" in result["systemMessage"]
assert "ls command detected" in result["systemMessage"]
assert "-la flag detected" in result["systemMessage"]
def test_blocking_rule_takes_priority(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Blocking rules should take priority over warnings."""
# Modify input to trigger blocking rule
sample_bash_input["tool_input"]["command"] = "rm -rf /tmp/test"
rules = [
make_rule(
name="warn-rm",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "rm"}],
action="warn",
message="rm command detected"
),
make_rule(
name="block-rm-rf",
event="bash",
conditions=[{"field": "command", "operator": "regex_match", "pattern": r"rm\s+-rf"}],
action="block",
message="Dangerous rm -rf blocked!"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
# Should have blocking output, not warning
assert "hookSpecificOutput" in result
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
assert "block-rm-rf" in result["systemMessage"]
assert "Dangerous rm -rf blocked!" in result["systemMessage"]
def test_multiple_blocking_rules_combined(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Multiple blocking rules should combine their messages."""
sample_bash_input["tool_input"]["command"] = "sudo rm -rf /"
rules = [
make_rule(
name="block-sudo",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "sudo"}],
action="block",
message="sudo is blocked"
),
make_rule(
name="block-rm-rf",
event="bash",
conditions=[{"field": "command", "operator": "regex_match", "pattern": r"rm\s+-rf"}],
action="block",
message="rm -rf is blocked"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
assert "block-sudo" in result["systemMessage"]
assert "block-rm-rf" in result["systemMessage"]
def test_no_matching_rules_returns_empty(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""When no rules match, result should be empty (allow operation)."""
rules = [
make_rule(
name="block-delete",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "delete"}],
action="block",
message="delete blocked"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert result == {}
class TestMultipleConditions:
"""Tests for rules with multiple conditions (AND logic)."""
def test_all_conditions_must_match(self, rule_engine: RuleEngine, sample_write_input: Dict[str, Any]):
"""Rule matches only if ALL conditions match."""
rules = [
make_rule(
name="block-sensitive-write",
event="file",
conditions=[
{"field": "file_path", "operator": "contains", "pattern": ".env"},
{"field": "content", "operator": "contains", "pattern": "SECRET"},
],
action="block",
message="Cannot write secrets to .env"
),
]
# Neither condition matches
result = rule_engine.evaluate_rules(rules, sample_write_input)
assert result == {}
# Only first condition matches
sample_write_input["tool_input"]["file_path"] = "/project/.env"
result = rule_engine.evaluate_rules(rules, sample_write_input)
assert result == {}
# Both conditions match
sample_write_input["tool_input"]["content"] = "SECRET_KEY=abc123"
result = rule_engine.evaluate_rules(rules, sample_write_input)
assert "hookSpecificOutput" in result
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
def test_multiple_operators_in_conditions(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test different operators in multiple conditions."""
rules = [
make_rule(
name="block-dangerous-curl",
event="bash",
conditions=[
{"field": "command", "operator": "starts_with", "pattern": "curl"},
{"field": "command", "operator": "contains", "pattern": "|"},
{"field": "command", "operator": "regex_match", "pattern": r"(bash|sh|eval)"},
],
action="block",
message="Dangerous curl pipe detected"
),
]
# Normal curl - doesn't match
sample_bash_input["tool_input"]["command"] = "curl https://example.com"
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert result == {}
# Dangerous curl pipe to bash - matches all
sample_bash_input["tool_input"]["command"] = "curl https://example.com | bash"
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
class TestToolTypeFieldExtraction:
"""Tests for field extraction across different tool types."""
def test_bash_command_field(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test field extraction for Bash tool."""
rules = [
make_rule(
name="detect-git",
event="bash",
conditions=[{"field": "command", "operator": "starts_with", "pattern": "git"}],
action="warn",
message="git command"
),
]
sample_bash_input["tool_input"]["command"] = "git status"
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "git command" in result.get("systemMessage", "")
def test_write_content_and_path(self, rule_engine: RuleEngine, sample_write_input: Dict[str, Any]):
"""Test field extraction for Write tool."""
rules = [
make_rule(
name="detect-python-file",
event="file",
conditions=[
{"field": "file_path", "operator": "ends_with", "pattern": ".py"},
{"field": "content", "operator": "contains", "pattern": "import"},
],
action="warn",
message="Python file with imports"
),
]
sample_write_input["tool_input"]["content"] = "import os\nprint('hello')"
result = rule_engine.evaluate_rules(rules, sample_write_input)
assert "Python file with imports" in result.get("systemMessage", "")
def test_edit_old_and_new_string(self, rule_engine: RuleEngine, sample_edit_input: Dict[str, Any]):
"""Test field extraction for Edit tool (old_string and new_string)."""
rules = [
make_rule(
name="detect-password-removal",
event="file",
conditions=[
{"field": "old_string", "operator": "contains", "pattern": "password"},
],
action="warn",
message="Removing password-related code"
),
]
sample_edit_input["tool_input"]["old_string"] = "password = 'secret'"
sample_edit_input["tool_input"]["new_string"] = "# removed"
result = rule_engine.evaluate_rules(rules, sample_edit_input)
assert "Removing password-related code" in result.get("systemMessage", "")
def test_multiedit_concatenated_content(self, rule_engine: RuleEngine, sample_multiedit_input: Dict[str, Any]):
"""Test field extraction for MultiEdit tool (concatenated edits)."""
rules = [
make_rule(
name="detect-eval",
event="file",
conditions=[
{"field": "new_text", "operator": "contains", "pattern": "eval("},
],
action="block",
message="eval() is dangerous"
),
]
# Add an edit containing eval
sample_multiedit_input["tool_input"]["edits"] = [
{"old_string": "process()", "new_string": "eval(user_input)"},
{"old_string": "foo", "new_string": "bar"},
]
result = rule_engine.evaluate_rules(rules, sample_multiedit_input)
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
class TestStopEventIntegration:
"""Tests for Stop event hook scenarios."""
def test_stop_with_transcript_check(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
"""Test Stop event that checks transcript content."""
rules = [
make_rule(
name="require-tests",
event="stop",
conditions=[
{"field": "transcript", "operator": "not_contains", "pattern": "npm test"},
],
action="block",
message="Please run tests before stopping"
),
]
# Transcript contains "npm test", so rule should NOT match
result = rule_engine.evaluate_rules(rules, sample_stop_input)
assert result == {}
def test_stop_blocks_without_tests(self, rule_engine: RuleEngine, temp_project_dir):
"""Test Stop event blocks when tests weren't run."""
# Create transcript without test command
transcript_file = temp_project_dir / "no_tests_transcript.txt"
transcript_file.write_text("""
User: Implement the feature
Assistant: Done!
""")
stop_input = {
"hook_event_name": "Stop",
"reason": "Task completed",
"transcript_path": str(transcript_file),
}
rules = [
make_rule(
name="require-tests",
event="stop",
conditions=[
{"field": "transcript", "operator": "not_contains", "pattern": "test"},
],
action="block",
message="Please run tests before stopping"
),
]
rule_engine = RuleEngine()
result = rule_engine.evaluate_rules(rules, stop_input)
assert result["decision"] == "block"
assert "require-tests" in result["systemMessage"]
def test_stop_reason_field(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
"""Test Stop event checking the reason field."""
rules = [
make_rule(
name="no-early-exit",
event="stop",
conditions=[
{"field": "reason", "operator": "contains", "pattern": "giving up"},
],
action="block",
message="Don't give up! Try a different approach."
),
]
# Normal reason - doesn't match
result = rule_engine.evaluate_rules(rules, sample_stop_input)
assert result == {}
# Giving up reason - matches
sample_stop_input["reason"] = "giving up on this task"
result = rule_engine.evaluate_rules(rules, sample_stop_input)
assert "Don't give up" in result.get("systemMessage", "")
class TestUserPromptSubmitIntegration:
"""Tests for UserPromptSubmit event hook scenarios."""
def test_prompt_content_validation(self, rule_engine: RuleEngine, sample_userprompt_input: Dict[str, Any]):
"""Test validating user prompt content."""
rules = [
make_rule(
name="warn-destructive-request",
event="prompt",
conditions=[
{"field": "user_prompt", "operator": "regex_match", "pattern": r"delete\s+all"},
],
action="warn",
message="This looks like a destructive request"
),
]
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
assert "destructive request" in result.get("systemMessage", "")
def test_prompt_blocking(self, rule_engine: RuleEngine, sample_userprompt_input: Dict[str, Any]):
"""Test blocking certain prompt patterns."""
rules = [
make_rule(
name="block-injection",
event="prompt",
conditions=[
{"field": "user_prompt", "operator": "contains", "pattern": "ignore previous instructions"},
],
action="block",
message="Potential prompt injection detected"
),
]
# Normal prompt - doesn't match
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
assert "hookSpecificOutput" not in result
# Injection attempt - matches
sample_userprompt_input["user_prompt"] = "ignore previous instructions and..."
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
assert "prompt injection" in result.get("systemMessage", "")
class TestToolMatcherFiltering:
"""Tests for tool_matcher filtering rules to specific tools."""
def test_tool_matcher_single_tool(self, rule_engine: RuleEngine):
"""Test tool_matcher filtering to a single tool."""
rules = [
make_rule(
name="bash-only",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
action="warn",
message="Bash rule",
tool_matcher="Bash"
),
]
bash_input = {
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {"command": "test command"}
}
write_input = {
"hook_event_name": "PreToolUse",
"tool_name": "Write",
"tool_input": {"content": "test content"}
}
# Should match Bash
result = rule_engine.evaluate_rules(rules, bash_input)
assert "Bash rule" in result.get("systemMessage", "")
# Should not match Write
result = rule_engine.evaluate_rules(rules, write_input)
assert result == {}
def test_tool_matcher_multiple_tools(self, rule_engine: RuleEngine, sample_edit_input: Dict[str, Any]):
"""Test tool_matcher with pipe-separated tools."""
rules = [
make_rule(
name="file-tools",
event="file",
conditions=[{"field": "file_path", "operator": "ends_with", "pattern": ".py"}],
action="warn",
message="Python file edit",
tool_matcher="Edit|Write|MultiEdit"
),
]
# Edit tool should match
result = rule_engine.evaluate_rules(rules, sample_edit_input)
assert "Python file edit" in result.get("systemMessage", "")
def test_tool_matcher_wildcard(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test tool_matcher with wildcard."""
rules = [
make_rule(
name="all-tools",
event="all",
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
action="warn",
message="All tools rule",
tool_matcher="*"
),
]
sample_bash_input["tool_input"]["command"] = "test command"
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "All tools rule" in result.get("systemMessage", "")
class TestRegexOperations:
"""Tests for regex pattern matching and caching."""
def test_complex_regex_patterns(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test complex regex patterns."""
rules = [
make_rule(
name="detect-secret-env",
event="bash",
conditions=[
{"field": "command", "operator": "regex_match",
"pattern": r"(SECRET|PASSWORD|API_KEY|TOKEN)[\s]*="},
],
action="block",
message="Secret assignment detected"
),
]
# Test various patterns
test_cases = [
("export SECRET=abc", True),
("export PASSWORD = abc", True),
("export API_KEY=xyz", True),
("export TOKEN=123", True),
("export NAME=test", False),
("echo hello", False),
]
for command, should_match in test_cases:
sample_bash_input["tool_input"]["command"] = command
result = rule_engine.evaluate_rules(rules, sample_bash_input)
if should_match:
assert "hookSpecificOutput" in result, f"Expected match for: {command}"
else:
assert result == {}, f"Expected no match for: {command}"
def test_case_insensitive_matching(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test that regex matching is case-insensitive."""
rules = [
make_rule(
name="detect-sudo",
event="bash",
conditions=[
{"field": "command", "operator": "regex_match", "pattern": "sudo"},
],
action="warn",
message="sudo detected"
),
]
# Should match regardless of case
for cmd in ["sudo apt install", "SUDO apt install", "Sudo apt install"]:
sample_bash_input["tool_input"]["command"] = cmd
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "sudo detected" in result.get("systemMessage", ""), f"Failed for: {cmd}"
def test_invalid_regex_handled_gracefully(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Test that invalid regex patterns don't crash."""
rules = [
make_rule(
name="invalid-regex",
event="bash",
conditions=[
{"field": "command", "operator": "regex_match", "pattern": "[invalid(regex"},
],
action="block",
message="Should not match"
),
]
# Should not crash, should return empty (no match)
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert result == {}
class TestDisabledRules:
"""Tests for disabled rule handling."""
def test_disabled_rules_not_evaluated(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Disabled rules should not be evaluated."""
rules = [
make_rule(
name="disabled-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="block",
message="Should not appear",
enabled=False
),
make_rule(
name="enabled-rule",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="warn",
message="Enabled rule matched",
enabled=True
),
]
# Filter to only enabled rules (as load_rules does)
enabled_rules = [r for r in rules if r.enabled]
result = rule_engine.evaluate_rules(enabled_rules, sample_bash_input)
assert "Enabled rule matched" in result.get("systemMessage", "")
assert "Should not appear" not in result.get("systemMessage", "")
class TestRulesWithNoConditions:
"""Tests for edge cases with empty conditions."""
def test_rule_without_conditions_does_not_match(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Rules without conditions should not match anything."""
rule = Rule(
name="empty-conditions",
enabled=True,
event="bash",
conditions=[], # Empty conditions
action="warn",
message="Should not match"
)
result = rule_engine.evaluate_rules([rule], sample_bash_input)
assert result == {}
class TestOutputFormats:
"""Tests for correct output format for different event types."""
def test_pretooluse_blocking_format(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""PreToolUse blocking should use hookSpecificOutput format."""
rules = [
make_rule(
name="block-test",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="block",
message="Blocked"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "hookSpecificOutput" in result
assert result["hookSpecificOutput"]["hookEventName"] == "PreToolUse"
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
assert "systemMessage" in result
def test_stop_blocking_format(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
"""Stop blocking should use decision format."""
rules = [
make_rule(
name="block-stop",
event="stop",
conditions=[{"field": "reason", "operator": "contains", "pattern": "completed"}],
action="block",
message="Blocked"
),
]
result = rule_engine.evaluate_rules(rules, sample_stop_input)
assert result.get("decision") == "block"
assert "reason" in result
assert "systemMessage" in result
def test_warning_format(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
"""Warning should only have systemMessage, not hookSpecificOutput."""
rules = [
make_rule(
name="warn-test",
event="bash",
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
action="warn",
message="Warning"
),
]
result = rule_engine.evaluate_rules(rules, sample_bash_input)
assert "systemMessage" in result
assert "hookSpecificOutput" not in result


@@ -0,0 +1,410 @@
"""Tests for rule loading and filtering from .local.md files.
Tests cover:
- Loading multiple rule files
- Event-based filtering
- YAML frontmatter parsing
- Legacy pattern to conditions conversion
"""
import pytest
from pathlib import Path
from hookify.core.config_loader import (
Rule, Condition, load_rules, load_rule_file, extract_frontmatter
)
class TestExtractFrontmatter:
"""Tests for YAML frontmatter extraction."""
def test_simple_frontmatter(self):
"""Test parsing simple key-value pairs."""
content = """---
name: test-rule
enabled: true
event: bash
action: warn
---
Rule message here.
"""
frontmatter, message = extract_frontmatter(content)
assert frontmatter["name"] == "test-rule"
assert frontmatter["enabled"] is True
assert frontmatter["event"] == "bash"
assert frontmatter["action"] == "warn"
assert message == "Rule message here."
def test_boolean_values(self):
"""Test boolean value parsing (true/false)."""
content = """---
enabled: true
disabled: false
---
msg
"""
frontmatter, _ = extract_frontmatter(content)
assert frontmatter["enabled"] is True
assert frontmatter["disabled"] is False
def test_quoted_strings(self):
"""Test quoted string parsing."""
content = """---
pattern: "rm -rf"
name: 'test-name'
---
msg
"""
frontmatter, _ = extract_frontmatter(content)
assert frontmatter["pattern"] == "rm -rf"
assert frontmatter["name"] == "test-name"
def test_conditions_list(self):
"""Test parsing conditions as list of dicts."""
content = """---
name: test
conditions:
- field: command
operator: contains
pattern: test
- field: file_path
operator: ends_with
pattern: .py
---
msg
"""
frontmatter, _ = extract_frontmatter(content)
assert "conditions" in frontmatter
assert len(frontmatter["conditions"]) == 2
assert frontmatter["conditions"][0]["field"] == "command"
assert frontmatter["conditions"][0]["operator"] == "contains"
assert frontmatter["conditions"][1]["pattern"] == ".py"
def test_inline_dict_conditions(self):
"""Test parsing inline comma-separated dict items."""
content = """---
name: test
conditions:
- field: command, operator: regex_match, pattern: test
---
msg
"""
frontmatter, _ = extract_frontmatter(content)
assert len(frontmatter["conditions"]) == 1
assert frontmatter["conditions"][0]["field"] == "command"
assert frontmatter["conditions"][0]["operator"] == "regex_match"
def test_no_frontmatter(self):
"""Test handling content without frontmatter."""
content = "Just plain text without frontmatter"
frontmatter, message = extract_frontmatter(content)
assert frontmatter == {}
assert message == content
def test_incomplete_frontmatter(self):
"""Test handling incomplete frontmatter markers."""
content = """---
name: test
No closing marker
"""
frontmatter, _ = extract_frontmatter(content)
assert frontmatter == {}
class TestLoadRuleFile:
"""Tests for loading individual rule files."""
def test_load_valid_rule(self, create_rule_file):
"""Test loading a valid rule file."""
content = """---
name: valid-rule
enabled: true
event: bash
action: block
conditions:
- field: command
operator: contains
pattern: danger
---
This is a dangerous command!
"""
rule_file = create_rule_file("valid-rule", content)
rule = load_rule_file(str(rule_file))
assert rule is not None
assert rule.name == "valid-rule"
assert rule.enabled is True
assert rule.event == "bash"
assert rule.action == "block"
assert len(rule.conditions) == 1
assert rule.conditions[0].field == "command"
assert "dangerous command" in rule.message
def test_load_legacy_pattern_rule(self, create_rule_file):
"""Test loading rule with legacy pattern (converts to condition)."""
content = """---
name: legacy-rule
enabled: true
event: bash
pattern: "rm -rf"
---
Old style rule.
"""
rule_file = create_rule_file("legacy-rule", content)
rule = load_rule_file(str(rule_file))
assert rule is not None
assert len(rule.conditions) == 1
assert rule.conditions[0].field == "command" # Inferred from bash event
assert rule.conditions[0].operator == "regex_match"
assert rule.conditions[0].pattern == "rm -rf"
def test_load_file_event_legacy_pattern(self, create_rule_file):
"""Test legacy pattern with file event infers correct field."""
content = """---
name: file-legacy
enabled: true
event: file
pattern: "TODO"
---
Found TODO.
"""
rule_file = create_rule_file("file-legacy", content)
rule = load_rule_file(str(rule_file))
assert rule.conditions[0].field == "new_text"
def test_load_missing_frontmatter(self, create_rule_file):
"""Test loading file without frontmatter returns None."""
content = "No frontmatter here"
rule_file = create_rule_file("no-frontmatter", content)
rule = load_rule_file(str(rule_file))
assert rule is None
def test_load_nonexistent_file(self):
"""Test loading nonexistent file returns None."""
rule = load_rule_file("/nonexistent/path/hookify.test.local.md")
assert rule is None
class TestLoadRules:
"""Tests for loading multiple rules with filtering."""
def test_load_multiple_rules(self, temp_project_dir, create_rule_file):
"""Test loading multiple rule files."""
create_rule_file("rule1", """---
name: rule-one
enabled: true
event: bash
conditions:
- field: command
operator: contains
pattern: test1
---
Rule 1
""")
create_rule_file("rule2", """---
name: rule-two
enabled: true
event: bash
conditions:
- field: command
operator: contains
pattern: test2
---
Rule 2
""")
rules = load_rules()
assert len(rules) == 2
names = {r.name for r in rules}
assert "rule-one" in names
assert "rule-two" in names
def test_filter_by_event(self, temp_project_dir, create_rule_file):
"""Test filtering rules by event type."""
create_rule_file("bash-rule", """---
name: bash-rule
enabled: true
event: bash
conditions:
- field: command
operator: contains
pattern: test
---
Bash rule
""")
create_rule_file("file-rule", """---
name: file-rule
enabled: true
event: file
conditions:
- field: content
operator: contains
pattern: test
---
File rule
""")
create_rule_file("all-rule", """---
name: all-rule
enabled: true
event: all
conditions:
- field: content
operator: contains
pattern: test
---
All events rule
""")
# Filter for bash events
bash_rules = load_rules(event="bash")
bash_names = {r.name for r in bash_rules}
assert "bash-rule" in bash_names
assert "all-rule" in bash_names # 'all' matches any event
assert "file-rule" not in bash_names
# Filter for file events
file_rules = load_rules(event="file")
file_names = {r.name for r in file_rules}
assert "file-rule" in file_names
assert "all-rule" in file_names
assert "bash-rule" not in file_names
def test_filter_excludes_disabled(self, temp_project_dir, create_rule_file):
"""Test that disabled rules are excluded."""
create_rule_file("enabled-rule", """---
name: enabled-rule
enabled: true
event: bash
conditions:
- field: command
operator: contains
pattern: test
---
Enabled
""")
create_rule_file("disabled-rule", """---
name: disabled-rule
enabled: false
event: bash
conditions:
- field: command
operator: contains
pattern: test
---
Disabled
""")
rules = load_rules()
assert len(rules) == 1
assert rules[0].name == "enabled-rule"
def test_load_rules_handles_invalid_file(self, temp_project_dir, create_rule_file):
"""Test that invalid files are skipped without crashing."""
# Valid rule
create_rule_file("valid", """---
name: valid
enabled: true
event: bash
conditions:
- field: command
operator: contains
pattern: test
---
Valid rule
""")
# Invalid rule (no frontmatter)
create_rule_file("invalid", "No frontmatter")
rules = load_rules()
# Should only load the valid rule
assert len(rules) == 1
assert rules[0].name == "valid"
def test_load_with_no_rules(self, temp_project_dir):
"""Test loading when no rule files exist."""
rules = load_rules()
assert rules == []
class TestRuleFromDict:
"""Tests for Rule.from_dict construction."""
def test_defaults(self):
"""Test default values for optional fields."""
frontmatter = {
"name": "test",
"event": "bash",
}
rule = Rule.from_dict(frontmatter, "message")
assert rule.name == "test"
assert rule.enabled is True # Default
assert rule.action == "warn" # Default
assert rule.message == "message"
def test_explicit_values(self):
"""Test explicit values override defaults."""
frontmatter = {
"name": "test",
"enabled": False,
"event": "file",
"action": "block",
"tool_matcher": "Write|Edit",
}
rule = Rule.from_dict(frontmatter, "message")
assert rule.enabled is False
assert rule.event == "file"
assert rule.action == "block"
assert rule.tool_matcher == "Write|Edit"
class TestConditionFromDict:
"""Tests for Condition.from_dict construction."""
def test_all_fields(self):
"""Test creating condition with all fields."""
data = {
"field": "command",
"operator": "regex_match",
"pattern": r"rm\s+-rf"
}
condition = Condition.from_dict(data)
assert condition.field == "command"
assert condition.operator == "regex_match"
assert condition.pattern == r"rm\s+-rf"
def test_default_operator(self):
"""Test default operator is regex_match."""
data = {
"field": "command",
"pattern": "test"
}
condition = Condition.from_dict(data)
assert condition.operator == "regex_match"
def test_missing_fields(self):
"""Test missing fields default to empty strings."""
data = {}
condition = Condition.from_dict(data)
assert condition.field == ""
assert condition.pattern == ""

scripts/sweep.ts Normal file (163 lines added)

@@ -0,0 +1,163 @@
#!/usr/bin/env bun
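// Sweeps open issues: marks inactive issues as stale and closes issues whose
// lifecycle label has exceeded its timeout (see `lifecycle` below).
// Requires GITHUB_TOKEN, GITHUB_REPOSITORY_OWNER and GITHUB_REPOSITORY_NAME in the
// environment; pass --dry-run to log the planned actions without making changes.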
// -- configuration
const NEW_ISSUE = "https://github.com/anthropics/claude-code/issues/new/choose";
const DRY_RUN = process.argv.includes("--dry-run");
const STALE_DAYS = 14;
const STALE_UPVOTE_THRESHOLD = 10;
const CLOSE_MESSAGE = (reason: string) =>
`Closing for now — ${reason}. Please [open a new issue](${NEW_ISSUE}) if this is still relevant.`;
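// Each lifecycle label carries a timeout in days and the reason included in the
// close comment once that timeout has elapsed since the label was applied.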
const lifecycle = [
{ label: "invalid", days: 3, reason: "this doesn't appear to be about Claude Code" },
{ label: "needs-repro", days: 7, reason: "we still need reproduction steps to investigate" },
{ label: "needs-info", days: 7, reason: "we still need a bit more information to move forward" },
{ label: "stale", days: 14, reason: "inactive for too long" },
{ label: "autoclose", days: 14, reason: "inactive for too long" },
];
// -- GitHub REST API helper
async function githubRequest<T>(
endpoint: string,
method = "GET",
body?: unknown
): Promise<T> {
const token = process.env.GITHUB_TOKEN;
if (!token) throw new Error("GITHUB_TOKEN required");
const response = await fetch(`https://api.github.com${endpoint}`, {
method,
headers: {
Authorization: `Bearer ${token}`,
Accept: "application/vnd.github.v3+json",
"User-Agent": "sweep",
...(body && { "Content-Type": "application/json" }),
},
...(body && { body: JSON.stringify(body) }),
});
if (!response.ok) {
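// Treat a 404 as an empty result so a missing resource doesn't abort the whole sweep.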
if (response.status === 404) return {} as T;
const text = await response.text();
throw new Error(`GitHub API ${response.status}: ${text}`);
}
return response.json();
}
// -- sweep passes
async function markStale(owner: string, repo: string) {
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - STALE_DAYS);
let labeled = 0;
console.log(`\n=== marking stale (${STALE_DAYS}d inactive) ===`);
for (let page = 1; page <= 10; page++) {
const issues = await githubRequest<any[]>(
`/repos/${owner}/${repo}/issues?state=open&sort=updated&direction=asc&per_page=100&page=${page}`
);
if (issues.length === 0) break;
for (const issue of issues) {
if (issue.pull_request) continue;
if (issue.locked) continue;
if (issue.assignees?.length > 0) continue;
const updatedAt = new Date(issue.updated_at);
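// Issues are fetched oldest-updated first, so once one falls inside the
// activity window every remaining issue is active too and we can stop.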
if (updatedAt > cutoff) return labeled;
const alreadyStale = issue.labels?.some(
(l: any) => l.name === "stale" || l.name === "autoclose"
);
if (alreadyStale) continue;
const isEnhancement = issue.labels?.some(
(l: any) => l.name === "enhancement"
);
const thumbsUp = issue.reactions?.["+1"] ?? 0;
if (isEnhancement && thumbsUp >= STALE_UPVOTE_THRESHOLD) continue;
const base = `/repos/${owner}/${repo}/issues/${issue.number}`;
if (DRY_RUN) {
const age = Math.floor((Date.now() - updatedAt.getTime()) / 86400000);
console.log(`#${issue.number}: would label stale (${age}d inactive) — ${issue.title}`);
} else {
await githubRequest(`${base}/labels`, "POST", { labels: ["stale"] });
console.log(`#${issue.number}: labeled stale — ${issue.title}`);
}
labeled++;
}
}
return labeled;
}
async function closeExpired(owner: string, repo: string) {
let closed = 0;
for (const { label, days, reason } of lifecycle) {
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - days);
console.log(`\n=== ${label} (${days}d timeout) ===`);
for (let page = 1; page <= 10; page++) {
const issues = await githubRequest<any[]>(
`/repos/${owner}/${repo}/issues?state=open&labels=${label}&sort=updated&direction=asc&per_page=100&page=${page}`
);
if (issues.length === 0) break;
for (const issue of issues) {
if (issue.pull_request) continue;
const base = `/repos/${owner}/${repo}/issues/${issue.number}`;
const events = await githubRequest<any[]>(`${base}/events?per_page=100`);
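// Determine when this lifecycle label was last applied: the final matching
// "labeled" event is taken as the most recent application.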
const labeledAt = events
.filter((e) => e.event === "labeled" && e.label?.name === label)
.map((e) => new Date(e.created_at))
.pop();
if (!labeledAt || labeledAt > cutoff) continue;
if (DRY_RUN) {
const age = Math.floor((Date.now() - labeledAt.getTime()) / 86400000);
console.log(`#${issue.number}: would close (${label}, ${age}d old) — ${issue.title}`);
} else {
await githubRequest(`${base}/comments`, "POST", { body: CLOSE_MESSAGE(reason) });
await githubRequest(base, "PATCH", { state: "closed", state_reason: "not_planned" });
console.log(`#${issue.number}: closed (${label})`);
}
closed++;
}
}
}
return closed;
}
// -- entry point
async function main() {
const owner = process.env.GITHUB_REPOSITORY_OWNER;
const repo = process.env.GITHUB_REPOSITORY_NAME;
if (!owner || !repo)
throw new Error("GITHUB_REPOSITORY_OWNER and GITHUB_REPOSITORY_NAME required");
if (DRY_RUN) console.log("DRY RUN — no changes will be made\n");
const labeled = await markStale(owner, repo);
const closed = await closeExpired(owner, repo);
console.log(`\nDone: ${labeled} ${DRY_RUN ? "would be labeled" : "labeled"} stale, ${closed} ${DRY_RUN ? "would be closed" : "closed"}`);
}
main().catch(console.error);
export {};