mirror of
https://github.com/anthropics/claude-code.git
synced 2026-04-18 09:22:49 +00:00
Compare: v2.1.114...claude/sla (1 commit)
Commit: f48a6223ce
@@ -1,5 +1,5 @@
---
allowed-tools: Bash(./scripts/gh.sh:*), Bash(./scripts/comment-on-duplicates.sh:*)
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(./scripts/comment-on-duplicates.sh:*)
description: Find duplicate GitHub issues
---

@@ -13,15 +13,11 @@ To do this, follow these steps precisely:
4. Next, feed the results from #1 and #2 into another agent, so that it can filter out false positives that are likely not actually duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, use the comment script to post duplicates:
```
./scripts/comment-on-duplicates.sh --potential-duplicates <dup1> <dup2> <dup3>
./scripts/comment-on-duplicates.sh --base-issue <issue-number> --potential-duplicates <dup1> <dup2> <dup3>
```

Notes (be sure to tell this to your agents, too):

- Use `./scripts/gh.sh` to interact with GitHub, rather than web fetch or raw `gh`. Examples:
  - `./scripts/gh.sh issue view 123` — view an issue
  - `./scripts/gh.sh issue view 123 --comments` — view with comments
  - `./scripts/gh.sh issue list --state open --limit 20` — list issues
  - `./scripts/gh.sh search issues "query" --limit 10` — search for issues
- Do not use other tools, beyond `./scripts/gh.sh` and the comment script (e.g. don't use other MCP servers, file edit, etc.)
- Use `gh` to interact with GitHub, rather than web fetch
- Do not use other tools, beyond `gh` and the comment script (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
40
.claude/commands/oncall-triage.md
Normal file
@@ -0,0 +1,40 @@
---
allowed-tools: Bash(gh issue list:*), Bash(gh issue view:*), Bash(gh issue edit:*), TodoWrite
description: Triage GitHub issues and label critical ones for oncall
---

You're an oncall triage assistant for GitHub issues. Your task is to identify critical issues that require immediate oncall attention and apply the "oncall" label.

Repository: anthropics/claude-code

Task overview:

1. First, get all open bugs updated in the last 3 days with at least 50 engagements:
```bash
gh issue list --repo anthropics/claude-code --state open --label bug --limit 1000 --json number,title,updatedAt,comments,reactions | jq -r '.[] | select((.updatedAt >= (now - 259200 | strftime("%Y-%m-%dT%H:%M:%SZ"))) and ((.comments | length) + ([.reactions[].content] | length) >= 50)) | "\(.number)"'
```
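The jq filter above combines two checks: a 3-day recency cutoff (259200 seconds) and an engagement count of comments plus reactions. A minimal sketch of that logic on fabricated data (the sample file and the lowered threshold are illustrative, not part of the workflow):

```shell
# 259200 in `now - 259200` is the 3-day window, in seconds.
test $(( 3 * 24 * 60 * 60 )) -eq 259200 && echo "cutoff ok"

# Engagement = comments + reactions. The threshold is lowered from 50 to 3
# here so the single fabricated issue qualifies and its number is printed.
cat > /tmp/issues-sample.json <<'EOF'
[{"number": 42, "updatedAt": "2099-01-01T00:00:00Z",
  "comments": [{}, {}],
  "reactions": [{"content": "+1"}, {"content": "eyes"}]}]
EOF
jq -r '.[] | select((.updatedAt >= (now - 259200 | strftime("%Y-%m-%dT%H:%M:%SZ")))
  and ((.comments | length) + ([.reactions[].content] | length) >= 3))
  | "\(.number)"' /tmp/issues-sample.json
```

Because `updatedAt` is an ISO-8601 string, the recency check works as a plain lexicographic string comparison against the formatted cutoff.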

2. Save the list of issue numbers and create a TODO list with ALL of them. This ensures you process every single one.

3. For each issue in your TODO list:
   - Use `gh issue view <number> --repo anthropics/claude-code --json title,body,labels,comments` to get full details
   - Read and understand the full issue content and comments to determine actual user impact
   - Evaluate: Is this truly blocking users from using Claude Code?
     - Consider: "crash", "stuck", "frozen", "hang", "unresponsive", "cannot use", "blocked", "broken"
     - Does it prevent core functionality? Can users work around it?
   - Be conservative - only flag issues that truly prevent users from getting work done

4. For issues that are truly blocking and don't already have the "oncall" label:
   - Use `gh issue edit <number> --repo anthropics/claude-code --add-label "oncall"`
   - Mark the issue as complete in your TODO list

5. After processing all issues, provide a summary:
   - List each issue number that received the "oncall" label
   - Include the issue title and brief reason why it qualified
   - If no issues qualified, state that clearly

Important:
- Process ALL issues in your TODO list systematically
- Don't post any comments to issues
- Only add the "oncall" label, never remove it
- Use individual `gh issue view` commands instead of bash for loops to avoid approval prompts
@@ -1,74 +0,0 @@
---
allowed-tools: Bash(./scripts/gh.sh:*),Bash(./scripts/edit-issue-labels.sh:*)
description: Triage GitHub issues by analyzing and applying labels
---

You're an issue triage assistant. Analyze the issue and manage labels.

IMPORTANT: Don't post any comments or messages to the issue. Your only actions are adding or removing labels.

Context:

$ARGUMENTS

TOOLS:
- `./scripts/gh.sh` — wrapper for `gh` CLI. Only supports these subcommands and flags:
  - `./scripts/gh.sh label list` — fetch all available labels
  - `./scripts/gh.sh label list --limit 100` — fetch with limit
  - `./scripts/gh.sh issue view 123` — read issue title, body, and labels
  - `./scripts/gh.sh issue view 123 --comments` — read the conversation
  - `./scripts/gh.sh issue list --state open --limit 20` — list issues
  - `./scripts/gh.sh search issues "query"` — find similar or duplicate issues
  - `./scripts/gh.sh search issues "query" --limit 10` — search with limit
- `./scripts/edit-issue-labels.sh --add-label LABEL --remove-label LABEL` — add or remove labels (issue number is read from the workflow event)

TASK:

1. Run `./scripts/gh.sh label list` to fetch the available labels. You may ONLY use labels from this list. Never invent new labels.
2. Run `./scripts/gh.sh issue view ISSUE_NUMBER` to read the issue details.
3. Run `./scripts/gh.sh issue view ISSUE_NUMBER --comments` to read the conversation.

**If EVENT is "issues" (new issue):**

4. First, check if this issue is actually about Claude Code.
   - Look for Claude Code signals in the issue BODY: a `Claude Code Version` field or `claude --version` output, references to the `claude` CLI command, terminal sessions, the VS Code/JetBrains extensions, `CLAUDE.md` files, `.claude/` directories, MCP servers, Cowork, Remote Control, or the web UI at claude.ai/code. If ANY such signal is present, this IS a Claude Code issue — proceed to step 5.
   - Only if NO Claude Code signals are present: check whether a different Anthropic product (claude.ai chat, Claude Desktop/Mobile apps, the raw Anthropic API/SDK, or account billing with no CLI involvement) is the *subject* of the complaint, not merely mentioned for context. If so, apply `invalid` and stop. If ambiguous, proceed to step 5 WITHOUT applying `invalid`.
   - The body text is authoritative. If a form dropdown (e.g. Platform) contradicts evidence in the body, trust the body — dropdowns are often mis-selected.

5. Analyze and apply category labels:
   - Type (bug, enhancement, question, etc.)
   - Technical areas and platform
   - Check for duplicates with `./scripts/gh.sh search issues`. Only mark as duplicate of OPEN issues.

6. Evaluate lifecycle labels:
   - `needs-repro` (bugs only, 7 days): Bug reports without clear steps to reproduce. A good repro has specific, followable steps that someone else could use to see the same issue.
     Do NOT apply if the user already provided error messages, logs, file paths, or a description of what they did. Don't require a specific format — narrative descriptions count.
     For model behavior issues (e.g. "Claude does X when it should do Y"), don't require traditional repro steps — examples and patterns are sufficient.
   - `needs-info` (bugs only, 7 days): The issue needs something from the community before it can progress — e.g. error messages, versions, environment details, or answers to follow-up questions. Don't apply to questions or enhancements.
     Do NOT apply if the user already provided version, environment, and error details. If the issue just needs engineering investigation, that's not `needs-info`.

   Issues with these labels are automatically closed after the timeout if there's no response.
   The goal is to avoid issues lingering without a clear next step.

7. Apply all selected labels:
   `./scripts/edit-issue-labels.sh --add-label "label1" --add-label "label2"`

**If EVENT is "issue_comment" (comment on existing issue):**

4. Evaluate lifecycle labels based on the full conversation:
   - If the issue has `stale` or `autoclose`, remove the label — a new human comment means the issue is still active:
     `./scripts/edit-issue-labels.sh --remove-label "stale" --remove-label "autoclose"`
   - If the issue has `needs-repro` or `needs-info` and the missing information has now been provided, remove the label:
     `./scripts/edit-issue-labels.sh --remove-label "needs-repro"`
   - If the issue doesn't have lifecycle labels but clearly needs them (e.g., a maintainer asked for repro steps or more details), add the appropriate label.
   - Comments like "+1", "me too", "same here", or emoji reactions are NOT the missing information. Only remove `needs-repro` or `needs-info` when substantive details are actually provided.
   - Do NOT add or remove category labels (bug, enhancement, etc.) on comment events.

GUIDELINES:
- ONLY use labels from `./scripts/gh.sh label list` — never create or guess label names
- DO NOT post any comments to the issue
- Be conservative with lifecycle labels — only apply when clearly warranted
- Only apply lifecycle labels (`needs-repro`, `needs-info`) to bugs — never to questions or enhancements
- When in doubt, don't apply a lifecycle label — false positives are worse than missing labels
- On new issues (EVENT "issues"), always apply exactly one of `bug`, `enhancement`, `question`, `invalid`, or `duplicate`. If unsure, pick the closest fit — an imperfect category label is better than none.
- On comment events, it's okay to make no changes if nothing applies.
2
.github/workflows/claude-dedupe-issues.yml
vendored
@@ -17,6 +17,7 @@ jobs:
    permissions:
      contents: read
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
@@ -26,7 +27,6 @@
        uses: anthropics/claude-code-action@v1
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CLAUDE_CODE_SCRIPT_CAPS: '{"comment-on-duplicates.sh":1}'
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: "*"
71
.github/workflows/claude-issue-triage.yml
vendored
@@ -18,6 +18,7 @@ jobs:
    permissions:
      contents: read
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
@@ -28,12 +29,76 @@
        uses: anthropics/claude-code-action@v1
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
          CLAUDE_CODE_SCRIPT_CAPS: '{"edit-issue-labels.sh":2}'
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: "*"
          prompt: "/triage-issue REPO: ${{ github.repository }} ISSUE_NUMBER: ${{ github.event.issue.number }} EVENT: ${{ github.event_name }}"
          prompt: |
            You're an issue triage assistant. Analyze the issue and manage labels.

            IMPORTANT: Don't post any comments or messages to the issue. Your only actions are adding or removing labels.

            Context:
            - REPO: ${{ github.repository }}
            - ISSUE_NUMBER: ${{ github.event.issue.number }}
            - EVENT: ${{ github.event_name }}

            ALLOWED LABELS — you may ONLY use labels from this list. Never invent new labels.

            Type: bug, enhancement, question, documentation, duplicate, invalid
            Lifecycle: needs-repro, needs-info
            Platform: platform:linux, platform:macos, platform:windows, platform:wsl, platform:ios, platform:android, platform:vscode, platform:intellij, platform:web, platform:aws-bedrock
            API: api:bedrock, api:vertex

            TOOLS:
            - `gh issue view NUMBER`: Read the issue title, body, and labels
            - `gh issue view NUMBER --comments`: Read the conversation
            - `gh search issues QUERY`: Find similar or duplicate issues
            - `gh issue edit NUMBER --add-label` / `--remove-label`: Add or remove labels

            TASK:

            1. Run `gh issue view ${{ github.event.issue.number }}` to read the issue details.
            2. Run `gh issue view ${{ github.event.issue.number }} --comments` to read the conversation.

            **If EVENT is "issues" (new issue):**

            3. First, check if this issue is actually about Claude Code (the CLI/IDE tool). Issues about the Claude API, claude.ai, the Claude app, Anthropic billing, or other Anthropic products should be labeled `invalid`. If invalid, apply only that label and stop.

            4. Analyze and apply category labels:
               - Type (bug, enhancement, question, etc.)
               - Technical areas and platform
               - Check for duplicates with `gh search issues`. Only mark as duplicate of OPEN issues.

            5. Evaluate lifecycle labels:
               - `needs-repro` (bugs only, 7 days): Bug reports without clear steps to reproduce. A good repro has specific, followable steps that someone else could use to see the same issue.
                 Do NOT apply if the user already provided error messages, logs, file paths, or a description of what they did. Don't require a specific format — narrative descriptions count.
                 For model behavior issues (e.g. "Claude does X when it should do Y"), don't require traditional repro steps — examples and patterns are sufficient.
               - `needs-info` (bugs only, 7 days): The issue needs something from the community before it can progress — e.g. error messages, versions, environment details, or answers to follow-up questions. Don't apply to questions or enhancements.
                 Do NOT apply if the user already provided version, environment, and error details. If the issue just needs engineering investigation, that's not `needs-info`.

               Issues with these labels are automatically closed after the timeout if there's no response.
               The goal is to avoid issues lingering without a clear next step.

            6. Apply all selected labels:
               `gh issue edit ${{ github.event.issue.number }} --add-label "label1" --add-label "label2"`

            **If EVENT is "issue_comment" (comment on existing issue):**

            3. Evaluate lifecycle labels based on the full conversation:
               - If the issue has `needs-repro` or `needs-info` and the missing information has now been provided, remove the label:
                 `gh issue edit ${{ github.event.issue.number }} --remove-label "needs-repro"`
               - If the issue doesn't have lifecycle labels but clearly needs them (e.g., a maintainer asked for repro steps or more details), add the appropriate label.
               - Comments like "+1", "me too", "same here", or emoji reactions are NOT the missing information. Only remove labels when substantive details are actually provided.
               - Do NOT add or remove category labels (bug, enhancement, etc.) on comment events.

            GUIDELINES:
            - ONLY use labels from the ALLOWED LABELS list above — never create or guess label names
            - DO NOT post any comments to the issue
            - Be conservative with lifecycle labels — only apply when clearly warranted
            - Only apply lifecycle labels (`needs-repro`, `needs-info`) to bugs — never to questions or enhancements
            - When in doubt, don't apply a lifecycle label — false positives are worse than missing labels
            - It's okay to not add any labels if none are clearly applicable
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --model claude-opus-4-6
            --allowedTools "Bash(gh issue view:*),Bash(gh issue edit:*),Bash(gh search issues:*)"
27
.github/workflows/issue-lifecycle-comment.yml
vendored
@@ -1,27 +0,0 @@
name: "Issue Lifecycle Comment"

on:
  issues:
    types: [labeled]

permissions:
  issues: write

jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest

      - name: Post lifecycle comment
        run: bun run scripts/lifecycle-comment.ts
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LABEL: ${{ github.event.label.name }}
          ISSUE_NUMBER: ${{ github.event.issue.number }}
47
.github/workflows/non-write-users-check.yml
vendored
@@ -1,47 +0,0 @@
name: Non-write Users Check
on:
  pull_request:
    paths:
      - ".github/**"

permissions:
  contents: read
  pull-requests: write

jobs:
  allowed-non-write-check:
    runs-on: ubuntu-latest
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - run: |
          DIFF=$(gh pr diff "$PR_NUMBER" -R "$REPO" || true)

          if ! echo "$DIFF" | grep -qE '^diff --git a/\.github/.*\.ya?ml'; then
            exit 0
          fi

          MATCHES=$(echo "$DIFF" | grep "^+.*allowed_non_write_users" || true)

          if [ -z "$MATCHES" ]; then
            exit 0
          fi

          EXISTING=$(gh pr view "$PR_NUMBER" -R "$REPO" --json comments --jq '.comments[].body' \
            | grep -c "<!-- non-write-users-check -->" || true)

          if [ "$EXISTING" -gt 0 ]; then
            exit 0
          fi

          gh pr comment "$PR_NUMBER" -R "$REPO" --body '<!-- non-write-users-check -->
          **`allowed_non_write_users` detected**

          This PR adds or modifies `allowed_non_write_users`, which allows users without write access to trigger Claude Code Action workflows. This can introduce security risks.

          If this is a new flow, please make sure you actually need `allowed_non_write_users`. If you are editing an existing workflow, double check that you are not adding new Claude permissions which might lead to a vulnerability.

          See existing workflows in this repo for safe usage examples, or contact the AppSec team.'
        env:
          PR_NUMBER: ${{ github.event.pull_request.number }}
          REPO: ${{ github.repository }}
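The script's two grep checks, the `.github` YAML diff-header match and the hidden-marker dedup count, can be exercised in isolation. A sketch on fabricated input (the filenames are illustrative):

```shell
# The workflow only proceeds when the PR diff touches YAML under .github/.
printf 'diff --git a/.github/workflows/x.yml b/.github/workflows/x.yml\n' \
  | grep -qE '^diff --git a/\.github/.*\.ya?ml' && echo "yaml touched"

# Comment deduplication: count prior comment bodies carrying the hidden
# HTML marker, so the warning is only posted once per PR.
printf 'first comment\n<!-- non-write-users-check -->\n' \
  | grep -c "<!-- non-write-users-check -->"
```

The `ya?ml` alternation covers both `.yml` and `.yaml` extensions, and `grep -c` returns the match count that the script compares against zero.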
118
.github/workflows/oncall-triage.yml
vendored
Normal file
@@ -0,0 +1,118 @@
name: Oncall Issue Triage
description: Automatically identify and label critical blocking issues requiring oncall attention
on:
  push:
    branches:
      - add-oncall-triage-workflow # Temporary: for testing only
  schedule:
    # Run every 6 hours
    - cron: '0 */6 * * *'
  workflow_dispatch: # Allow manual trigger

jobs:
  oncall-triage:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    permissions:
      contents: read
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-7aced2b"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Run Claude Code for Oncall Triage
        timeout-minutes: 10
        uses: anthropics/claude-code-action@v1
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: "*"
          prompt: |
            You're an oncall triage assistant for GitHub issues. Your task is to identify critical issues that require immediate oncall attention.

            Important: Don't post any comments or messages to the issues. Your only action should be to apply the "oncall" label to qualifying issues.

            Repository: ${{ github.repository }}

            Task overview:
            1. Fetch all open issues updated in the last 3 days:
               - Use mcp__github__list_issues with:
                 - state="open"
                 - first=5 (fetch only 5 issues per page)
                 - orderBy="UPDATED_AT"
                 - direction="DESC"
               - This will give you the most recently updated issues first
               - For each page of results, check the updatedAt timestamp of each issue
               - Add issues updated within the last 3 days (72 hours) to your TODO list as you go
               - Keep paginating using the 'after' parameter until you encounter issues older than 3 days
               - Once you hit issues older than 3 days, you can stop fetching (no need to fetch all open issues)

            2. Build your TODO list incrementally as you fetch:
               - As you fetch each page, immediately add qualifying issues to your TODO list
               - One TODO item per issue number (e.g., "Evaluate issue #123")
               - This allows you to start processing while still fetching more pages

            3. For each issue in your TODO list:
               - Use mcp__github__get_issue to read the issue details (title, body, labels)
               - Use mcp__github__get_issue_comments to read all comments
               - Evaluate whether this issue needs the oncall label:
                 a) Is it a bug? (has "bug" label or describes bug behavior)
                 b) Does it have at least 50 engagements? (count comments + reactions)
                 c) Is it truly blocking? Read and understand the full content to determine:
                    - Does this prevent core functionality from working?
                    - Can users work around it?
                    - Consider severity indicators: "crash", "stuck", "frozen", "hang", "unresponsive", "cannot use", "blocked", "broken"
                    - Be conservative - only flag issues that truly prevent users from getting work done

            4. For issues that meet all criteria and do not already have the "oncall" label:
               - Use mcp__github__update_issue to add the "oncall" label
               - Do not post any comments
               - Do not remove any existing labels
               - Do not remove the "oncall" label from issues that already have it

            Important guidelines:
            - Use the TODO list to track your progress through ALL candidate issues
            - Process issues efficiently - don't read every single issue upfront, work through your TODO list systematically
            - Be conservative in your assessment - only flag truly critical blocking issues
            - Do not post any comments to issues
            - Your only action should be to add the "oncall" label using mcp__github__update_issue
            - Mark each issue as complete in your TODO list as you process it

            5. After processing all issues in your TODO list, provide a summary of your actions:
               - Total number of issues processed (candidate issues evaluated)
               - Number of issues that received the "oncall" label
               - For each issue that got the label: list issue number, title, and brief reason why it qualified
               - Close calls: List any issues that almost qualified but didn't quite meet the criteria (e.g., borderline blocking, had workarounds)
               - If no issues qualified, state that clearly
               - Format the summary clearly for easy reading
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --mcp-config /tmp/mcp-config/mcp-servers.json
            --allowedTools "mcp__github__list_issues,mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue"
1465 CHANGELOG.md
File diff suppressed because it is too large
@@ -1,28 +0,0 @@
# MDM Deployment Examples

Example templates for deploying Claude Code [managed settings](https://code.claude.com/docs/en/settings#settings-files) through Jamf, Iru (Kandji), Intune, or Group Policy. Use these as starting points — adjust them to fit your needs.

All templates encode the same minimal example (`permissions.disableBypassPermissionsMode`). See the [settings reference](https://code.claude.com/docs/en/settings#available-settings) for the full list of keys, and [`../settings`](../settings) for more complete example configurations.

## Templates

> [!WARNING]
> These examples are community-maintained templates which may be unsupported or incorrect. You are responsible for the correctness of your own deployment configuration.

| File | Use with |
| :--- | :--- |
| [`managed-settings.json`](./managed-settings.json) | Any platform. Deploy to the [system config directory](https://code.claude.com/docs/en/settings#settings-files). |
| [`macos/com.anthropic.claudecode.plist`](./macos/com.anthropic.claudecode.plist) | Jamf or Iru (Kandji) **Custom Settings** payload. Preference domain: `com.anthropic.claudecode`. |
| [`macos/com.anthropic.claudecode.mobileconfig`](./macos/com.anthropic.claudecode.mobileconfig) | Full configuration profile for local testing or MDMs that take a complete profile. |
| [`windows/Set-ClaudeCodePolicy.ps1`](./windows/Set-ClaudeCodePolicy.ps1) | Intune **Platform scripts**. Writes `managed-settings.json` to `C:\Program Files\ClaudeCode\`. |
| [`windows/ClaudeCode.admx`](./windows/ClaudeCode.admx) + [`en-US/ClaudeCode.adml`](./windows/en-US/ClaudeCode.adml) | Group Policy or Intune **Import ADMX**. Writes `HKLM\SOFTWARE\Policies\ClaudeCode\Settings` (REG_SZ, single-line JSON). |

## Tips

- Replace the placeholder `PayloadUUID` and `PayloadOrganization` values in the `.mobileconfig` with your own (`uuidgen`)
- Before deploying to your fleet, test on a single machine and confirm `/status` lists the source under **Setting sources** — e.g. `Enterprise managed settings (plist)` on macOS or `Enterprise managed settings (HKLM)` on Windows
- Settings deployed this way sit at the top of the precedence order and cannot be overridden by users

## Full Documentation

See https://code.claude.com/docs/en/settings#settings-files for complete documentation on managed settings and settings precedence.
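As a quick pre-deployment sanity check, the minimal example that all of these templates encode can be validated with `jq` (a sketch; `jq` availability is assumed, and the JSON is recreated in `/tmp` here rather than read from a repo checkout):

```shell
# Recreate the minimal managed-settings.json from the table above, then
# assert the one key it is expected to set. `jq -e` exits non-zero if the
# expression is false or null, making this usable in a CI gate.
cat > /tmp/managed-settings.json <<'EOF'
{
  "permissions": {
    "disableBypassPermissionsMode": "disable"
  }
}
EOF
jq -e '.permissions.disableBypassPermissionsMode == "disable"' \
  /tmp/managed-settings.json && echo "managed settings ok"
```

The same one-liner works against a deployed file path on a test machine before rolling the template out to a fleet.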
@@ -1,56 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>PayloadDisplayName</key>
  <string>Claude Code Managed Settings</string>
  <key>PayloadDescription</key>
  <string>Configures managed settings for Claude Code.</string>
  <key>PayloadIdentifier</key>
  <string>com.anthropic.claudecode.profile</string>
  <key>PayloadOrganization</key>
  <string>Example Organization</string>
  <key>PayloadScope</key>
  <string>System</string>
  <key>PayloadType</key>
  <string>Configuration</string>
  <key>PayloadUUID</key>
  <string>DC3CBC17-3330-4CDE-94AC-D2342E9C88A3</string>
  <key>PayloadVersion</key>
  <integer>1</integer>
  <key>PayloadContent</key>
  <array>
    <dict>
      <key>PayloadDisplayName</key>
      <string>Claude Code</string>
      <key>PayloadIdentifier</key>
      <string>com.anthropic.claudecode.profile.BEFD5F54-71FC-4012-82B2-94399A1E220B</string>
      <key>PayloadType</key>
      <string>com.apple.ManagedClient.preferences</string>
      <key>PayloadUUID</key>
      <string>BEFD5F54-71FC-4012-82B2-94399A1E220B</string>
      <key>PayloadVersion</key>
      <integer>1</integer>
      <key>PayloadContent</key>
      <dict>
        <key>com.anthropic.claudecode</key>
        <dict>
          <key>Forced</key>
          <array>
            <dict>
              <key>mcx_preference_settings</key>
              <dict>
                <key>permissions</key>
                <dict>
                  <key>disableBypassPermissionsMode</key>
                  <string>disable</string>
                </dict>
              </dict>
            </dict>
          </array>
        </dict>
      </dict>
    </dict>
  </array>
</dict>
</plist>
@@ -1,11 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>permissions</key>
  <dict>
    <key>disableBypassPermissionsMode</key>
    <string>disable</string>
  </dict>
</dict>
</plist>
@@ -1,5 +0,0 @@
{
  "permissions": {
    "disableBypassPermissionsMode": "disable"
  }
}
@@ -1,28 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<policyDefinitions xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns="http://schemas.microsoft.com/GroupPolicy/2006/07/PolicyDefinitions"
                   revision="1.0" schemaVersion="1.0">
  <policyNamespaces>
    <target prefix="claudecode" namespace="Anthropic.Policies.ClaudeCode" />
    <using prefix="windows" namespace="Microsoft.Policies.Windows" />
  </policyNamespaces>
  <resources minRequiredRevision="1.0" />
  <categories>
    <category name="Cat_ClaudeCode" displayName="$(string.Cat_ClaudeCode)" />
  </categories>
  <policies>
    <policy name="ManagedSettings"
            class="Machine"
            displayName="$(string.ManagedSettings)"
            explainText="$(string.ManagedSettings_Explain)"
            presentation="$(presentation.ManagedSettings)"
            key="SOFTWARE\Policies\ClaudeCode">
      <parentCategory ref="Cat_ClaudeCode" />
      <supportedOn ref="windows:SUPPORTED_Windows_10_0" />
      <elements>
        <text id="SettingsJson" valueName="Settings" maxLength="1000000" required="true" />
      </elements>
    </policy>
  </policies>
</policyDefinitions>
@@ -1,28 +0,0 @@
<#
Deploys Claude Code managed settings as a JSON file.

Intune: Devices > Scripts and remediations > Platform scripts > Add (Windows 10 and later).
Run this script using the logged on credentials: No
Run script in 64 bit PowerShell Host: Yes

Claude Code reads C:\Program Files\ClaudeCode\managed-settings.json at startup
and treats it as a managed policy source. Edit the JSON below to change the
deployed settings; see https://code.claude.com/docs/en/settings for available keys.
#>

$ErrorActionPreference = 'Stop'

$dir = Join-Path $env:ProgramFiles 'ClaudeCode'
New-Item -ItemType Directory -Path $dir -Force | Out-Null

$json = @'
{
  "permissions": {
    "disableBypassPermissionsMode": "disable"
  }
}
'@

$path = Join-Path $dir 'managed-settings.json'
[System.IO.File]::WriteAllText($path, $json, (New-Object System.Text.UTF8Encoding($false)))
Write-Output "Wrote $path"
@@ -1,31 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<policyDefinitionResources xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://schemas.microsoft.com/GroupPolicy/2006/07/PolicyDefinitions"
    revision="1.0" schemaVersion="1.0">
  <displayName>Claude Code</displayName>
  <description>Claude Code policy settings</description>
  <resources>
    <stringTable>
      <string id="Cat_ClaudeCode">Claude Code</string>
      <string id="ManagedSettings">Managed settings (JSON)</string>
      <string id="ManagedSettings_Explain">Configures managed settings for Claude Code.

Enter the full settings configuration as a single line of JSON. The value is stored as a REG_SZ string at HKLM\SOFTWARE\Policies\ClaudeCode\Settings and is applied at the highest precedence; users cannot override these settings.

Example:
{"permissions":{"disableBypassPermissionsMode":"disable"}}

For the list of available settings keys, see https://code.claude.com/docs/en/settings.

If your configuration is large or you prefer to manage a JSON file directly, deploy C:\Program Files\ClaudeCode\managed-settings.json instead (see Set-ClaudeCodePolicy.ps1).</string>
    </stringTable>
    <presentationTable>
      <presentation id="ManagedSettings">
        <textBox refId="SettingsJson">
          <label>Settings JSON:</label>
        </textBox>
      </presentation>
    </presentationTable>
  </resources>
</policyDefinitionResources>
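Because the policy value is stored as a single REG_SZ string, it must parse as one line of JSON. The example value from the explain text above round-trips cleanly:

```python
import json

# The example REG_SZ value from ManagedSettings_Explain: one line of
# JSON stored at HKLM\SOFTWARE\Policies\ClaudeCode\Settings.
raw = '{"permissions":{"disableBypassPermissionsMode":"disable"}}'
settings = json.loads(raw)
```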
@@ -1,6 +1,6 @@
 # Settings Examples
 
-Example Claude Code settings files, primarily intended for organization-wide deployments. Use these as starting points — adjust them to fit your needs.
+Example Claude Code settings files, primarily intended for organization-wide deployments. Use these are starting points — adjust them to fit your needs.
 
 These may be applied at any level of the [settings hierarchy](https://code.claude.com/docs/en/settings#settings-files), though certain properties only take effect if specified in enterprise settings (e.g. `strictKnownMarketplaces`, `allowManagedHooksOnly`, `allowManagedPermissionRulesOnly`).
 
@@ -26,10 +26,6 @@ These may be applied at any level of the [settings hierarchy](https://code.claud
 - Before deploying configuration files to your organization, test them locally by applying to `managed-settings.json`, `settings.json` or `settings.local.json`
 - The `sandbox` property only applies to the `Bash` tool; it does not apply to other tools (like Read, Write, WebSearch, WebFetch, MCPs), hooks, or internal commands
 
-## Deploying via MDM
-
-To distribute these settings as enterprise-managed policy through Jamf, Iru (Kandji), Intune, or Group Policy, see the deployment templates in [`../mdm`](../mdm).
-
 ## Full Documentation
 
 See https://code.claude.com/docs/en/settings for complete documentation on all available managed settings.
@@ -68,7 +68,7 @@ Note: Still review Claude generated PR's.
 
 8. Create a list of all comments that you plan on leaving. This is only for you to make sure you are comfortable with the comments. Do not post this list anywhere.
 
-9. Post inline comments for each issue using `mcp__github_inline_comment__create_inline_comment` with `confirmed: true`. For each comment:
+9. Post inline comments for each issue using `mcp__github_inline_comment__create_inline_comment`. For each comment:
 - Provide a brief description of the issue
 - For small, self-contained fixes, include a committable suggestion block
 - For larger fixes (6+ lines, structural changes, or changes spanning multiple locations), describe the issue and suggested fix without a suggestion block
@@ -247,9 +247,13 @@ class RuleEngine:
         if field == 'file_path':
             return tool_input.get('file_path', '')
         elif field in ['new_text', 'content']:
-            # Concatenate all edits
+            # Concatenate all edits, handling malformed entries gracefully
             edits = tool_input.get('edits', [])
-            return ' '.join(e.get('new_string', '') for e in edits)
+            parts = []
+            for e in edits:
+                if isinstance(e, dict):
+                    parts.append(e.get('new_string', ''))
+            return ' '.join(parts)
 
         return None
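The hardened loop above can be exercised standalone. A sketch with the logic lifted out into a hypothetical `concat_edits` helper (the field-dispatch context is omitted):

```python
# Standalone sketch of the new extraction logic: dict entries contribute
# their 'new_string'; None, plain strings, and other malformed entries
# are skipped, where the old one-liner would raise AttributeError on
# the first non-dict entry.
def concat_edits(tool_input):
    edits = tool_input.get('edits', [])
    parts = []
    for e in edits:
        if isinstance(e, dict):
            parts.append(e.get('new_string', ''))
    return ' '.join(parts)

result = concat_edits({"edits": [
    {"old_string": "foo", "new_string": "bar"},
    None,
    "not a dict",
    {"old_string": "baz", "new_string": "qux"},
]})
# result is "bar qux"
```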
1
plugins/hookify/tests/__init__.py
Normal file
@@ -0,0 +1 @@
"""Hookify integration tests."""
208
plugins/hookify/tests/conftest.py
Normal file
@@ -0,0 +1,208 @@
"""Pytest fixtures for hookify integration tests."""

import os
import sys
import json
import tempfile
import shutil
from pathlib import Path
from typing import Generator, Dict, Any, List

import pytest

# Add parent directories to path for imports
PLUGIN_ROOT = Path(__file__).parent.parent
PLUGINS_DIR = PLUGIN_ROOT.parent
sys.path.insert(0, str(PLUGINS_DIR))
sys.path.insert(0, str(PLUGIN_ROOT))

from hookify.core.config_loader import Rule, Condition, load_rules, extract_frontmatter
from hookify.core.rule_engine import RuleEngine


@pytest.fixture
def rule_engine() -> RuleEngine:
    """Create a RuleEngine instance."""
    return RuleEngine()


@pytest.fixture
def temp_project_dir() -> Generator[Path, None, None]:
    """Create a temporary project directory with .claude folder.

    This fixture creates a clean temp directory and changes to it,
    then restores the original directory after the test.
    """
    original_dir = os.getcwd()
    temp_dir = tempfile.mkdtemp(prefix="hookify_test_")

    # Create .claude directory for rule files
    claude_dir = Path(temp_dir) / ".claude"
    claude_dir.mkdir()

    os.chdir(temp_dir)

    yield Path(temp_dir)

    os.chdir(original_dir)
    shutil.rmtree(temp_dir)


@pytest.fixture
def sample_rule_file(temp_project_dir: Path) -> Path:
    """Create a sample rule file for testing."""
    rule_content = """---
name: block-rm-rf
enabled: true
event: bash
action: block
conditions:
  - field: command
    operator: regex_match
    pattern: rm\\s+-rf
---

**Dangerous command blocked!**

The `rm -rf` command can permanently delete files. Please use safer alternatives.
"""
    rule_file = temp_project_dir / ".claude" / "hookify.dangerous-commands.local.md"
    rule_file.write_text(rule_content)
    return rule_file


@pytest.fixture
def create_rule_file(temp_project_dir: Path):
    """Factory fixture to create rule files with custom content."""
    def _create(name: str, content: str) -> Path:
        rule_file = temp_project_dir / ".claude" / f"hookify.{name}.local.md"
        rule_file.write_text(content)
        return rule_file
    return _create


@pytest.fixture
def sample_bash_input() -> Dict[str, Any]:
    """Sample PreToolUse input for Bash tool."""
    return {
        "session_id": "test-session-123",
        "hook_event_name": "PreToolUse",
        "tool_name": "Bash",
        "tool_input": {
            "command": "ls -la"
        },
        "cwd": "/test/project"
    }


@pytest.fixture
def sample_write_input() -> Dict[str, Any]:
    """Sample PreToolUse input for Write tool."""
    return {
        "session_id": "test-session-123",
        "hook_event_name": "PreToolUse",
        "tool_name": "Write",
        "tool_input": {
            "file_path": "/test/project/src/main.py",
            "content": "print('hello world')"
        },
        "cwd": "/test/project"
    }


@pytest.fixture
def sample_edit_input() -> Dict[str, Any]:
    """Sample PreToolUse input for Edit tool."""
    return {
        "session_id": "test-session-123",
        "hook_event_name": "PreToolUse",
        "tool_name": "Edit",
        "tool_input": {
            "file_path": "/test/project/src/main.py",
            "old_string": "hello",
            "new_string": "goodbye"
        },
        "cwd": "/test/project"
    }


@pytest.fixture
def sample_multiedit_input() -> Dict[str, Any]:
    """Sample PreToolUse input for MultiEdit tool."""
    return {
        "session_id": "test-session-123",
        "hook_event_name": "PreToolUse",
        "tool_name": "MultiEdit",
        "tool_input": {
            "file_path": "/test/project/src/main.py",
            "edits": [
                {"old_string": "foo", "new_string": "bar"},
                {"old_string": "baz", "new_string": "qux"}
            ]
        },
        "cwd": "/test/project"
    }


@pytest.fixture
def sample_stop_input(temp_project_dir: Path) -> Dict[str, Any]:
    """Sample Stop event input with transcript file."""
    # Create a transcript file
    transcript_file = temp_project_dir / "transcript.txt"
    transcript_file.write_text("""
User: Please implement the feature
Assistant: I'll implement that feature now.
[Uses Write tool to create file]
User: Great, now run the tests
Assistant: Running tests...
[Uses Bash tool: npm test]
All tests passed!
""")

    return {
        "session_id": "test-session-123",
        "hook_event_name": "Stop",
        "reason": "Task completed",
        "transcript_path": str(transcript_file),
        "cwd": str(temp_project_dir)
    }


@pytest.fixture
def sample_userprompt_input() -> Dict[str, Any]:
    """Sample UserPromptSubmit event input."""
    return {
        "session_id": "test-session-123",
        "hook_event_name": "UserPromptSubmit",
        "user_prompt": "Please delete all files in the directory",
        "cwd": "/test/project"
    }


def make_rule(
    name: str,
    event: str,
    conditions: List[Dict[str, str]],
    action: str = "warn",
    message: str = "Test message",
    enabled: bool = True,
    tool_matcher: str = None
) -> Rule:
    """Helper function to create Rule objects for testing."""
    cond_objects = [
        Condition(
            field=c.get("field", ""),
            operator=c.get("operator", "regex_match"),
            pattern=c.get("pattern", "")
        )
        for c in conditions
    ]
    return Rule(
        name=name,
        enabled=enabled,
        event=event,
        conditions=cond_objects,
        action=action,
        message=message,
        tool_matcher=tool_matcher
    )
497
plugins/hookify/tests/test_error_handling.py
Normal file
@@ -0,0 +1,497 @@
"""Tests for error handling and fault tolerance in hookify.

Tests cover:
- Graceful handling of missing files
- Invalid JSON/YAML handling
- Regex compilation errors
- Transcript file access errors
- Import failures
- Edge cases and boundary conditions
"""

import pytest
import os
from pathlib import Path
from typing import Dict, Any
from unittest.mock import patch, mock_open

from hookify.core.config_loader import load_rules, load_rule_file, extract_frontmatter
from hookify.core.rule_engine import RuleEngine, compile_regex


class TestTranscriptFileErrors:
    """Tests for handling transcript file access errors."""

    def test_missing_transcript_file(self, rule_engine: RuleEngine, temp_project_dir):
        """Test handling when transcript file doesn't exist."""
        stop_input = {
            "hook_event_name": "Stop",
            "reason": "Done",
            "transcript_path": "/nonexistent/transcript.txt",
        }

        rules = [
            _make_rule(
                name="check-transcript",
                event="stop",
                conditions=[{"field": "transcript", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test message"
            ),
        ]

        # Should not crash, transcript returns empty string
        result = rule_engine.evaluate_rules(rules, stop_input)
        # Rule shouldn't match since transcript is empty
        assert result == {}

    def test_unreadable_transcript_file(self, rule_engine: RuleEngine, temp_project_dir):
        """Test handling when transcript file is unreadable."""
        # Create file and remove read permissions
        transcript_file = temp_project_dir / "unreadable.txt"
        transcript_file.write_text("content")
        os.chmod(transcript_file, 0o000)

        stop_input = {
            "hook_event_name": "Stop",
            "reason": "Done",
            "transcript_path": str(transcript_file),
        }

        rules = [
            _make_rule(
                name="check-transcript",
                event="stop",
                conditions=[{"field": "transcript", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        try:
            # Should not crash
            result = rule_engine.evaluate_rules(rules, stop_input)
            assert result == {}  # No match since transcript couldn't be read
        finally:
            # Restore permissions for cleanup
            os.chmod(transcript_file, 0o644)


class TestRegexErrors:
    """Tests for regex compilation and matching errors."""

    def test_invalid_regex_pattern(self, rule_engine: RuleEngine):
        """Test handling of invalid regex patterns."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "ls -la"}
        }

        rules = [
            _make_rule(
                name="invalid-regex",
                event="bash",
                conditions=[{"field": "command", "operator": "regex_match", "pattern": "[unclosed"}],
                action="block",
                message="Should not match"
            ),
        ]

        # Should not crash, invalid regex returns False (no match)
        result = rule_engine.evaluate_rules(rules, input_data)
        assert result == {}

    def test_catastrophic_backtracking_regex(self, rule_engine: RuleEngine):
        """Test handling of potentially slow regex patterns."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "a" * 100}
        }

        # This pattern could cause catastrophic backtracking in some engines
        # Python's re module handles this reasonably well
        rules = [
            _make_rule(
                name="complex-regex",
                event="bash",
                conditions=[{"field": "command", "operator": "regex_match", "pattern": "(a+)+$"}],
                action="warn",
                message="Matched"
            ),
        ]

        # Should complete without hanging
        result = rule_engine.evaluate_rules(rules, input_data)
        assert "Matched" in result.get("systemMessage", "")

    def test_regex_cache(self):
        """Test that regex patterns are cached."""
        pattern = r"test\s+pattern"

        # Compile same pattern twice
        regex1 = compile_regex(pattern)
        regex2 = compile_regex(pattern)

        # Should be the same object due to caching
        assert regex1 is regex2


class TestMalformedInput:
    """Tests for handling malformed input data."""

    def test_missing_tool_name(self, rule_engine: RuleEngine):
        """Test handling input without tool_name."""
        input_data = {
            "hook_event_name": "PreToolUse",
            # Missing tool_name
            "tool_input": {"command": "test"}
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should not crash
        result = rule_engine.evaluate_rules(rules, input_data)
        # May or may not match depending on implementation

    def test_missing_tool_input(self, rule_engine: RuleEngine):
        """Test handling input without tool_input."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            # Missing tool_input
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should not crash
        result = rule_engine.evaluate_rules(rules, input_data)
        assert result == {}  # No match with missing input

    def test_null_values_in_input(self, rule_engine: RuleEngine):
        """Test handling None values in tool_input."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {
                "command": None
            }
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should not crash
        result = rule_engine.evaluate_rules(rules, input_data)

    def test_non_string_field_values(self, rule_engine: RuleEngine):
        """Test handling non-string values that get converted."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {
                "command": 123  # Number instead of string
            }
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "123"}],
                action="warn",
                message="Found number"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, input_data)
        # Should convert to string and match
        assert "Found number" in result.get("systemMessage", "")


class TestRuleFileErrors:
    """Tests for rule file loading errors."""

    def test_malformed_yaml(self, create_rule_file):
        """Test handling of malformed YAML in frontmatter."""
        content = """---
name: test
enabled: [unclosed bracket
---
message
"""
        rule_file = create_rule_file("malformed", content)
        rule = load_rule_file(str(rule_file))

        # Should handle gracefully (may return None or partial data)
        # The custom YAML parser is lenient

    def test_unicode_errors(self, temp_project_dir):
        """Test handling of files with invalid unicode."""
        rule_file = temp_project_dir / ".claude" / "hookify.unicode.local.md"

        # Write binary content that's not valid UTF-8
        with open(rule_file, 'wb') as f:
            f.write(b"---\nname: test\n---\n\xff\xfe invalid unicode")

        rule = load_rule_file(str(rule_file))
        assert rule is None  # Should return None for encoding errors

    def test_empty_file(self, create_rule_file):
        """Test handling of empty rule file."""
        rule_file = create_rule_file("empty", "")
        rule = load_rule_file(str(rule_file))

        assert rule is None


class TestFieldExtractionErrors:
    """Tests for field extraction edge cases."""

    def test_unknown_field_name(self, rule_engine: RuleEngine):
        """Test handling of unknown field names."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "test"}
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "nonexistent_field", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should not crash, unknown field returns None -> no match
        result = rule_engine.evaluate_rules(rules, input_data)
        assert result == {}

    def test_multiedit_with_empty_edits(self, rule_engine: RuleEngine):
        """Test MultiEdit tool with empty edits array."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "MultiEdit",
            "tool_input": {
                "file_path": "/test/file.py",
                "edits": []  # Empty edits
            }
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="file",
                conditions=[{"field": "new_text", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should not crash
        result = rule_engine.evaluate_rules(rules, input_data)
        assert result == {}

    def test_multiedit_with_malformed_edits(self, rule_engine: RuleEngine):
        """Test MultiEdit tool with malformed edit entries."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "MultiEdit",
            "tool_input": {
                "file_path": "/test/file.py",
                "edits": [
                    {"invalid": "entry"},  # Missing new_string
                    None,  # Null entry
                    "not a dict"  # Wrong type
                ]
            }
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="file",
                conditions=[{"field": "new_text", "operator": "contains", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Should handle gracefully
        result = rule_engine.evaluate_rules(rules, input_data)


class TestOperatorEdgeCases:
    """Tests for operator edge cases."""

    def test_unknown_operator(self, rule_engine: RuleEngine):
        """Test handling of unknown operator."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "test"}
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "unknown_op", "pattern": "test"}],
                action="warn",
                message="Test"
            ),
        ]

        # Unknown operator returns False -> no match
        result = rule_engine.evaluate_rules(rules, input_data)
        assert result == {}

    def test_empty_pattern(self, rule_engine: RuleEngine):
        """Test handling of empty pattern."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "test"}
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": ""}],
                action="warn",
                message="Empty pattern"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, input_data)
        # Empty string is contained in any string
        assert "Empty pattern" in result.get("systemMessage", "")

    def test_special_characters_in_pattern(self, rule_engine: RuleEngine):
        """Test patterns with special regex characters when using 'contains'."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "echo $HOME"}
        }

        rules = [
            _make_rule(
                name="test-rule",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "$HOME"}],
                action="warn",
                message="Found $HOME"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, input_data)
        # 'contains' does literal string matching, not regex
        assert "Found $HOME" in result.get("systemMessage", "")


class TestConcurrentRuleEvaluation:
    """Tests for multiple rules with various states."""

    def test_mixed_match_states(self, rule_engine: RuleEngine):
        """Test evaluation with mix of matching and non-matching rules."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "ls -la"}
        }

        rules = [
            _make_rule(
                name="match-ls",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
                action="warn",
                message="Found ls"
            ),
            _make_rule(
                name="no-match-rm",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "rm"}],
                action="block",
                message="Found rm"
            ),
            _make_rule(
                name="match-dash",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "-"}],
                action="warn",
                message="Found dash"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, input_data)

        # Should have warnings from matching rules
        assert "Found ls" in result.get("systemMessage", "")
        assert "Found dash" in result.get("systemMessage", "")
        # Should not have blocking (rm rule didn't match)
        assert "hookSpecificOutput" not in result

    def test_empty_rules_list(self, rule_engine: RuleEngine):
        """Test evaluation with empty rules list."""
        input_data = {
            "hook_event_name": "PreToolUse",
            "tool_name": "Bash",
            "tool_input": {"command": "ls"}
        }

        result = rule_engine.evaluate_rules([], input_data)
        assert result == {}


# Helper function to create rules for tests
def _make_rule(name, event, conditions, action="warn", message="Test", enabled=True, tool_matcher=None):
    """Helper to create Rule objects."""
    from hookify.core.config_loader import Rule, Condition

    cond_objects = [
        Condition(
            field=c.get("field", ""),
            operator=c.get("operator", "regex_match"),
            pattern=c.get("pattern", "")
        )
        for c in conditions
    ]
    return Rule(
        name=name,
        enabled=enabled,
        event=event,
        conditions=cond_objects,
        action=action,
        message=message,
        tool_matcher=tool_matcher
    )
662
plugins/hookify/tests/test_integration.py
Normal file
@@ -0,0 +1,662 @@
"""Integration tests for multi-hook scenarios in hookify.

Tests cover:
- Multiple hooks running against same input
- Hook priority (blocking rules over warnings)
- Cross-event state management
- Different tool types with varying field structures
- Error handling and fault tolerance
"""

import pytest
from typing import Dict, Any, List

from hookify.core.config_loader import Rule, Condition, load_rules
from hookify.core.rule_engine import RuleEngine


def make_rule(
    name: str,
    event: str,
    conditions: List[Dict[str, str]],
    action: str = "warn",
    message: str = "Test message",
    enabled: bool = True,
    tool_matcher: str = None
) -> Rule:
    """Helper function to create Rule objects for testing."""
    cond_objects = [
        Condition(
            field=c.get("field", ""),
            operator=c.get("operator", "regex_match"),
            pattern=c.get("pattern", "")
        )
        for c in conditions
    ]
    return Rule(
        name=name,
        enabled=enabled,
        event=event,
        conditions=cond_objects,
        action=action,
        message=message,
        tool_matcher=tool_matcher
    )


class TestMultipleRulesEvaluation:
    """Tests for evaluating multiple rules against the same input."""

    def test_multiple_warning_rules_combined(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
        """Multiple warning rules should combine their messages."""
        rules = [
            make_rule(
                name="warn-ls",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
                action="warn",
                message="ls command detected"
            ),
            make_rule(
                name="warn-la-flag",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "-la"}],
                action="warn",
                message="-la flag detected"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, sample_bash_input)

        assert "systemMessage" in result
        assert "warn-ls" in result["systemMessage"]
        assert "warn-la-flag" in result["systemMessage"]
        assert "ls command detected" in result["systemMessage"]
        assert "-la flag detected" in result["systemMessage"]

    def test_blocking_rule_takes_priority(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
        """Blocking rules should take priority over warnings."""
        # Modify input to trigger blocking rule
        sample_bash_input["tool_input"]["command"] = "rm -rf /tmp/test"

        rules = [
            make_rule(
                name="warn-rm",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "rm"}],
                action="warn",
                message="rm command detected"
            ),
            make_rule(
                name="block-rm-rf",
                event="bash",
                conditions=[{"field": "command", "operator": "regex_match", "pattern": r"rm\s+-rf"}],
                action="block",
                message="Dangerous rm -rf blocked!"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, sample_bash_input)

        # Should have blocking output, not warning
        assert "hookSpecificOutput" in result
        assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
        assert "block-rm-rf" in result["systemMessage"]
        assert "Dangerous rm -rf blocked!" in result["systemMessage"]

    def test_multiple_blocking_rules_combined(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
        """Multiple blocking rules should combine their messages."""
        sample_bash_input["tool_input"]["command"] = "sudo rm -rf /"

        rules = [
            make_rule(
                name="block-sudo",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "sudo"}],
                action="block",
                message="sudo is blocked"
            ),
            make_rule(
                name="block-rm-rf",
                event="bash",
                conditions=[{"field": "command", "operator": "regex_match", "pattern": r"rm\s+-rf"}],
                action="block",
                message="rm -rf is blocked"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, sample_bash_input)

        assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
        assert "block-sudo" in result["systemMessage"]
        assert "block-rm-rf" in result["systemMessage"]

    def test_no_matching_rules_returns_empty(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
        """When no rules match, result should be empty (allow operation)."""
        rules = [
            make_rule(
                name="block-delete",
                event="bash",
                conditions=[{"field": "command", "operator": "contains", "pattern": "delete"}],
                action="block",
                message="delete blocked"
            ),
        ]

        result = rule_engine.evaluate_rules(rules, sample_bash_input)
        assert result == {}


class TestMultipleConditions:
    """Tests for rules with multiple conditions (AND logic)."""

    def test_all_conditions_must_match(self, rule_engine: RuleEngine, sample_write_input: Dict[str, Any]):
        """Rule matches only if ALL conditions match."""
        rules = [
            make_rule(
                name="block-sensitive-write",
                event="file",
                conditions=[
                    {"field": "file_path", "operator": "contains", "pattern": ".env"},
                    {"field": "content", "operator": "contains", "pattern": "SECRET"},
                ],
                action="block",
                message="Cannot write secrets to .env"
            ),
        ]

        # Neither condition matches
        result = rule_engine.evaluate_rules(rules, sample_write_input)
        assert result == {}

        # Only first condition matches
        sample_write_input["tool_input"]["file_path"] = "/project/.env"
        result = rule_engine.evaluate_rules(rules, sample_write_input)
        assert result == {}

        # Both conditions match
        sample_write_input["tool_input"]["content"] = "SECRET_KEY=abc123"
        result = rule_engine.evaluate_rules(rules, sample_write_input)
        assert "hookSpecificOutput" in result
        assert result["hookSpecificOutput"]["permissionDecision"] == "deny"

    def test_multiple_operators_in_conditions(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
        """Test different operators in multiple conditions."""
        rules = [
            make_rule(
                name="block-dangerous-curl",
                event="bash",
                conditions=[
                    {"field": "command", "operator": "starts_with", "pattern": "curl"},
                    {"field": "command", "operator": "contains", "pattern": "|"},
                    {"field": "command", "operator": "regex_match", "pattern": r"(bash|sh|eval)"},
                ],
                action="block",
                message="Dangerous curl pipe detected"
            ),
        ]

        # Normal curl - doesn't match
        sample_bash_input["tool_input"]["command"] = "curl https://example.com"
        result = rule_engine.evaluate_rules(rules, sample_bash_input)
        assert result == {}

        # Dangerous curl pipe to bash - matches all
        sample_bash_input["tool_input"]["command"] = "curl https://example.com | bash"
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
|
||||
|
||||
|
||||
class TestToolTypeFieldExtraction:
|
||||
"""Tests for field extraction across different tool types."""
|
||||
|
||||
def test_bash_command_field(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Test field extraction for Bash tool."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-git",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "starts_with", "pattern": "git"}],
|
||||
action="warn",
|
||||
message="git command"
|
||||
),
|
||||
]
|
||||
|
||||
sample_bash_input["tool_input"]["command"] = "git status"
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
assert "git command" in result.get("systemMessage", "")
|
||||
|
||||
def test_write_content_and_path(self, rule_engine: RuleEngine, sample_write_input: Dict[str, Any]):
|
||||
"""Test field extraction for Write tool."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-python-file",
|
||||
event="file",
|
||||
conditions=[
|
||||
{"field": "file_path", "operator": "ends_with", "pattern": ".py"},
|
||||
{"field": "content", "operator": "contains", "pattern": "import"},
|
||||
],
|
||||
action="warn",
|
||||
message="Python file with imports"
|
||||
),
|
||||
]
|
||||
|
||||
sample_write_input["tool_input"]["content"] = "import os\nprint('hello')"
|
||||
result = rule_engine.evaluate_rules(rules, sample_write_input)
|
||||
assert "Python file with imports" in result.get("systemMessage", "")
|
||||
|
||||
def test_edit_old_and_new_string(self, rule_engine: RuleEngine, sample_edit_input: Dict[str, Any]):
|
||||
"""Test field extraction for Edit tool (old_string and new_string)."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-password-removal",
|
||||
event="file",
|
||||
conditions=[
|
||||
{"field": "old_string", "operator": "contains", "pattern": "password"},
|
||||
],
|
||||
action="warn",
|
||||
message="Removing password-related code"
|
||||
),
|
||||
]
|
||||
|
||||
sample_edit_input["tool_input"]["old_string"] = "password = 'secret'"
|
||||
sample_edit_input["tool_input"]["new_string"] = "# removed"
|
||||
result = rule_engine.evaluate_rules(rules, sample_edit_input)
|
||||
assert "Removing password-related code" in result.get("systemMessage", "")
|
||||
|
||||
def test_multiedit_concatenated_content(self, rule_engine: RuleEngine, sample_multiedit_input: Dict[str, Any]):
|
||||
"""Test field extraction for MultiEdit tool (concatenated edits)."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-eval",
|
||||
event="file",
|
||||
conditions=[
|
||||
{"field": "new_text", "operator": "contains", "pattern": "eval("},
|
||||
],
|
||||
action="block",
|
||||
message="eval() is dangerous"
|
||||
),
|
||||
]
|
||||
|
||||
# Add an edit containing eval
|
||||
sample_multiedit_input["tool_input"]["edits"] = [
|
||||
{"old_string": "process()", "new_string": "eval(user_input)"},
|
||||
{"old_string": "foo", "new_string": "bar"},
|
||||
]
|
||||
result = rule_engine.evaluate_rules(rules, sample_multiedit_input)
|
||||
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
|
||||
|
||||
|
||||
class TestStopEventIntegration:
|
||||
"""Tests for Stop event hook scenarios."""
|
||||
|
||||
def test_stop_with_transcript_check(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
|
||||
"""Test Stop event that checks transcript content."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="require-tests",
|
||||
event="stop",
|
||||
conditions=[
|
||||
{"field": "transcript", "operator": "not_contains", "pattern": "npm test"},
|
||||
],
|
||||
action="block",
|
||||
message="Please run tests before stopping"
|
||||
),
|
||||
]
|
||||
|
||||
# Transcript contains "npm test", so rule should NOT match
|
||||
result = rule_engine.evaluate_rules(rules, sample_stop_input)
|
||||
assert result == {}
|
||||
|
||||
def test_stop_blocks_without_tests(self, rule_engine: RuleEngine, temp_project_dir):
|
||||
"""Test Stop event blocks when tests weren't run."""
|
||||
# Create transcript without test command
|
||||
transcript_file = temp_project_dir / "no_tests_transcript.txt"
|
||||
transcript_file.write_text("""
|
||||
User: Implement the feature
|
||||
Assistant: Done!
|
||||
""")
|
||||
|
||||
stop_input = {
|
||||
"hook_event_name": "Stop",
|
||||
"reason": "Task completed",
|
||||
"transcript_path": str(transcript_file),
|
||||
}
|
||||
|
||||
rules = [
|
||||
make_rule(
|
||||
name="require-tests",
|
||||
event="stop",
|
||||
conditions=[
|
||||
{"field": "transcript", "operator": "not_contains", "pattern": "test"},
|
||||
],
|
||||
action="block",
|
||||
message="Please run tests before stopping"
|
||||
),
|
||||
]
|
||||
|
||||
rule_engine = RuleEngine()
|
||||
result = rule_engine.evaluate_rules(rules, stop_input)
|
||||
|
||||
assert result["decision"] == "block"
|
||||
assert "require-tests" in result["systemMessage"]
|
||||
|
||||
def test_stop_reason_field(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
|
||||
"""Test Stop event checking the reason field."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="no-early-exit",
|
||||
event="stop",
|
||||
conditions=[
|
||||
{"field": "reason", "operator": "contains", "pattern": "giving up"},
|
||||
],
|
||||
action="block",
|
||||
message="Don't give up! Try a different approach."
|
||||
),
|
||||
]
|
||||
|
||||
# Normal reason - doesn't match
|
||||
result = rule_engine.evaluate_rules(rules, sample_stop_input)
|
||||
assert result == {}
|
||||
|
||||
# Giving up reason - matches
|
||||
sample_stop_input["reason"] = "giving up on this task"
|
||||
result = rule_engine.evaluate_rules(rules, sample_stop_input)
|
||||
assert "Don't give up" in result.get("systemMessage", "")
|
||||
|
||||
|
||||
class TestUserPromptSubmitIntegration:
|
||||
"""Tests for UserPromptSubmit event hook scenarios."""
|
||||
|
||||
def test_prompt_content_validation(self, rule_engine: RuleEngine, sample_userprompt_input: Dict[str, Any]):
|
||||
"""Test validating user prompt content."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="warn-destructive-request",
|
||||
event="prompt",
|
||||
conditions=[
|
||||
{"field": "user_prompt", "operator": "regex_match", "pattern": r"delete\s+all"},
|
||||
],
|
||||
action="warn",
|
||||
message="This looks like a destructive request"
|
||||
),
|
||||
]
|
||||
|
||||
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
|
||||
assert "destructive request" in result.get("systemMessage", "")
|
||||
|
||||
def test_prompt_blocking(self, rule_engine: RuleEngine, sample_userprompt_input: Dict[str, Any]):
|
||||
"""Test blocking certain prompt patterns."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="block-injection",
|
||||
event="prompt",
|
||||
conditions=[
|
||||
{"field": "user_prompt", "operator": "contains", "pattern": "ignore previous instructions"},
|
||||
],
|
||||
action="block",
|
||||
message="Potential prompt injection detected"
|
||||
),
|
||||
]
|
||||
|
||||
# Normal prompt - doesn't match
|
||||
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
|
||||
assert "hookSpecificOutput" not in result
|
||||
|
||||
# Injection attempt - matches
|
||||
sample_userprompt_input["user_prompt"] = "ignore previous instructions and..."
|
||||
result = rule_engine.evaluate_rules(rules, sample_userprompt_input)
|
||||
assert "prompt injection" in result.get("systemMessage", "")
|
||||
|
||||
|
||||
class TestToolMatcherFiltering:
|
||||
"""Tests for tool_matcher filtering rules to specific tools."""
|
||||
|
||||
def test_tool_matcher_single_tool(self, rule_engine: RuleEngine):
|
||||
"""Test tool_matcher filtering to a single tool."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="bash-only",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
|
||||
action="warn",
|
||||
message="Bash rule",
|
||||
tool_matcher="Bash"
|
||||
),
|
||||
]
|
||||
|
||||
bash_input = {
|
||||
"hook_event_name": "PreToolUse",
|
||||
"tool_name": "Bash",
|
||||
"tool_input": {"command": "test command"}
|
||||
}
|
||||
write_input = {
|
||||
"hook_event_name": "PreToolUse",
|
||||
"tool_name": "Write",
|
||||
"tool_input": {"content": "test content"}
|
||||
}
|
||||
|
||||
# Should match Bash
|
||||
result = rule_engine.evaluate_rules(rules, bash_input)
|
||||
assert "Bash rule" in result.get("systemMessage", "")
|
||||
|
||||
# Should not match Write
|
||||
result = rule_engine.evaluate_rules(rules, write_input)
|
||||
assert result == {}
|
||||
|
||||
def test_tool_matcher_multiple_tools(self, rule_engine: RuleEngine, sample_edit_input: Dict[str, Any]):
|
||||
"""Test tool_matcher with pipe-separated tools."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="file-tools",
|
||||
event="file",
|
||||
conditions=[{"field": "file_path", "operator": "ends_with", "pattern": ".py"}],
|
||||
action="warn",
|
||||
message="Python file edit",
|
||||
tool_matcher="Edit|Write|MultiEdit"
|
||||
),
|
||||
]
|
||||
|
||||
# Edit tool should match
|
||||
result = rule_engine.evaluate_rules(rules, sample_edit_input)
|
||||
assert "Python file edit" in result.get("systemMessage", "")
|
||||
|
||||
def test_tool_matcher_wildcard(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Test tool_matcher with wildcard."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="all-tools",
|
||||
event="all",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "test"}],
|
||||
action="warn",
|
||||
message="All tools rule",
|
||||
tool_matcher="*"
|
||||
),
|
||||
]
|
||||
|
||||
sample_bash_input["tool_input"]["command"] = "test command"
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
assert "All tools rule" in result.get("systemMessage", "")
|
||||
|
||||
|
||||
class TestRegexOperations:
|
||||
"""Tests for regex pattern matching and caching."""
|
||||
|
||||
def test_complex_regex_patterns(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Test complex regex patterns."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-secret-env",
|
||||
event="bash",
|
||||
conditions=[
|
||||
{"field": "command", "operator": "regex_match",
|
||||
"pattern": r"(SECRET|PASSWORD|API_KEY|TOKEN)[\s]*="},
|
||||
],
|
||||
action="block",
|
||||
message="Secret assignment detected"
|
||||
),
|
||||
]
|
||||
|
||||
# Test various patterns
|
||||
test_cases = [
|
||||
("export SECRET=abc", True),
|
||||
("export PASSWORD = abc", True),
|
||||
("export API_KEY=xyz", True),
|
||||
("export TOKEN=123", True),
|
||||
("export NAME=test", False),
|
||||
("echo hello", False),
|
||||
]
|
||||
|
||||
for command, should_match in test_cases:
|
||||
sample_bash_input["tool_input"]["command"] = command
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
if should_match:
|
||||
assert "hookSpecificOutput" in result, f"Expected match for: {command}"
|
||||
else:
|
||||
assert result == {}, f"Expected no match for: {command}"
|
||||
|
||||
def test_case_insensitive_matching(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Test that regex matching is case-insensitive."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="detect-sudo",
|
||||
event="bash",
|
||||
conditions=[
|
||||
{"field": "command", "operator": "regex_match", "pattern": "sudo"},
|
||||
],
|
||||
action="warn",
|
||||
message="sudo detected"
|
||||
),
|
||||
]
|
||||
|
||||
# Should match regardless of case
|
||||
for cmd in ["sudo apt install", "SUDO apt install", "Sudo apt install"]:
|
||||
sample_bash_input["tool_input"]["command"] = cmd
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
assert "sudo detected" in result.get("systemMessage", ""), f"Failed for: {cmd}"
|
||||
|
||||
def test_invalid_regex_handled_gracefully(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Test that invalid regex patterns don't crash."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="invalid-regex",
|
||||
event="bash",
|
||||
conditions=[
|
||||
{"field": "command", "operator": "regex_match", "pattern": "[invalid(regex"},
|
||||
],
|
||||
action="block",
|
||||
message="Should not match"
|
||||
),
|
||||
]
|
||||
|
||||
# Should not crash, should return empty (no match)
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
assert result == {}
|
||||
|
||||
|
||||
class TestDisabledRules:
|
||||
"""Tests for disabled rule handling."""
|
||||
|
||||
def test_disabled_rules_not_evaluated(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Disabled rules should not be evaluated."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="disabled-rule",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
|
||||
action="block",
|
||||
message="Should not appear",
|
||||
enabled=False
|
||||
),
|
||||
make_rule(
|
||||
name="enabled-rule",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
|
||||
action="warn",
|
||||
message="Enabled rule matched",
|
||||
enabled=True
|
||||
),
|
||||
]
|
||||
|
||||
# Filter to only enabled rules (as load_rules does)
|
||||
enabled_rules = [r for r in rules if r.enabled]
|
||||
result = rule_engine.evaluate_rules(enabled_rules, sample_bash_input)
|
||||
|
||||
assert "Enabled rule matched" in result.get("systemMessage", "")
|
||||
assert "Should not appear" not in result.get("systemMessage", "")
|
||||
|
||||
|
||||
class TestRulesWithNoConditions:
|
||||
"""Tests for edge cases with empty conditions."""
|
||||
|
||||
def test_rule_without_conditions_does_not_match(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Rules without conditions should not match anything."""
|
||||
rule = Rule(
|
||||
name="empty-conditions",
|
||||
enabled=True,
|
||||
event="bash",
|
||||
conditions=[], # Empty conditions
|
||||
action="warn",
|
||||
message="Should not match"
|
||||
)
|
||||
|
||||
result = rule_engine.evaluate_rules([rule], sample_bash_input)
|
||||
assert result == {}
|
||||
|
||||
|
||||
class TestOutputFormats:
|
||||
"""Tests for correct output format for different event types."""
|
||||
|
||||
def test_pretooluse_blocking_format(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""PreToolUse blocking should use hookSpecificOutput format."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="block-test",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
|
||||
action="block",
|
||||
message="Blocked"
|
||||
),
|
||||
]
|
||||
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
|
||||
assert "hookSpecificOutput" in result
|
||||
assert result["hookSpecificOutput"]["hookEventName"] == "PreToolUse"
|
||||
assert result["hookSpecificOutput"]["permissionDecision"] == "deny"
|
||||
assert "systemMessage" in result
|
||||
|
||||
def test_stop_blocking_format(self, rule_engine: RuleEngine, sample_stop_input: Dict[str, Any]):
|
||||
"""Stop blocking should use decision format."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="block-stop",
|
||||
event="stop",
|
||||
conditions=[{"field": "reason", "operator": "contains", "pattern": "completed"}],
|
||||
action="block",
|
||||
message="Blocked"
|
||||
),
|
||||
]
|
||||
|
||||
result = rule_engine.evaluate_rules(rules, sample_stop_input)
|
||||
|
||||
assert result.get("decision") == "block"
|
||||
assert "reason" in result
|
||||
assert "systemMessage" in result
|
||||
|
||||
def test_warning_format(self, rule_engine: RuleEngine, sample_bash_input: Dict[str, Any]):
|
||||
"""Warning should only have systemMessage, not hookSpecificOutput."""
|
||||
rules = [
|
||||
make_rule(
|
||||
name="warn-test",
|
||||
event="bash",
|
||||
conditions=[{"field": "command", "operator": "contains", "pattern": "ls"}],
|
||||
action="warn",
|
||||
message="Warning"
|
||||
),
|
||||
]
|
||||
|
||||
result = rule_engine.evaluate_rules(rules, sample_bash_input)
|
||||
|
||||
assert "systemMessage" in result
|
||||
assert "hookSpecificOutput" not in result
|
||||
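The output-format tests above pin down two distinct blocking payload shapes plus a warn shape. As a quick illustration, here are literal examples of each (these dicts are hand-written to match the assertions, not produced by the plugin; the `permissionDecisionReason` field is an assumption for illustration):

```python
# PreToolUse block: a hookSpecificOutput object carrying the permission decision.
pretooluse_block = {
    "hookSpecificOutput": {
        "hookEventName": "PreToolUse",
        "permissionDecision": "deny",
        "permissionDecisionReason": "Blocked",  # hypothetical field, for illustration
    },
    "systemMessage": "[block-test] Blocked",
}

# Stop block: top-level decision/reason keys instead of hookSpecificOutput.
stop_block = {
    "decision": "block",
    "reason": "Blocked",
    "systemMessage": "[block-stop] Blocked",
}

# Warn: only a systemMessage, so the operation is still allowed.
warn = {"systemMessage": "[warn-test] Warning"}

print("hookSpecificOutput" in pretooluse_block,
      stop_block["decision"],
      "hookSpecificOutput" in warn)
```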
plugins/hookify/tests/test_rule_loading.py (new file, 410 lines)
@@ -0,0 +1,410 @@
"""Tests for rule loading and filtering from .local.md files.

Tests cover:
- Loading multiple rule files
- Event-based filtering
- YAML frontmatter parsing
- Legacy pattern to conditions conversion
"""

import pytest
from pathlib import Path

from hookify.core.config_loader import (
    Rule, Condition, load_rules, load_rule_file, extract_frontmatter
)


class TestExtractFrontmatter:
    """Tests for YAML frontmatter extraction."""

    def test_simple_frontmatter(self):
        """Test parsing simple key-value pairs."""
        content = """---
name: test-rule
enabled: true
event: bash
action: warn
---

Rule message here.
"""
        frontmatter, message = extract_frontmatter(content)

        assert frontmatter["name"] == "test-rule"
        assert frontmatter["enabled"] is True
        assert frontmatter["event"] == "bash"
        assert frontmatter["action"] == "warn"
        assert message == "Rule message here."

    def test_boolean_values(self):
        """Test boolean value parsing (true/false)."""
        content = """---
enabled: true
disabled: false
---
msg
"""
        frontmatter, _ = extract_frontmatter(content)

        assert frontmatter["enabled"] is True
        assert frontmatter["disabled"] is False

    def test_quoted_strings(self):
        """Test quoted string parsing."""
        content = """---
pattern: "rm -rf"
name: 'test-name'
---
msg
"""
        frontmatter, _ = extract_frontmatter(content)

        assert frontmatter["pattern"] == "rm -rf"
        assert frontmatter["name"] == "test-name"

    def test_conditions_list(self):
        """Test parsing conditions as list of dicts."""
        content = """---
name: test
conditions:
  - field: command
    operator: contains
    pattern: test
  - field: file_path
    operator: ends_with
    pattern: .py
---
msg
"""
        frontmatter, _ = extract_frontmatter(content)

        assert "conditions" in frontmatter
        assert len(frontmatter["conditions"]) == 2
        assert frontmatter["conditions"][0]["field"] == "command"
        assert frontmatter["conditions"][0]["operator"] == "contains"
        assert frontmatter["conditions"][1]["pattern"] == ".py"

    def test_inline_dict_conditions(self):
        """Test parsing inline comma-separated dict items."""
        content = """---
name: test
conditions:
  - field: command, operator: regex_match, pattern: test
---
msg
"""
        frontmatter, _ = extract_frontmatter(content)

        assert len(frontmatter["conditions"]) == 1
        assert frontmatter["conditions"][0]["field"] == "command"
        assert frontmatter["conditions"][0]["operator"] == "regex_match"

    def test_no_frontmatter(self):
        """Test handling content without frontmatter."""
        content = "Just plain text without frontmatter"
        frontmatter, message = extract_frontmatter(content)

        assert frontmatter == {}
        assert message == content

    def test_incomplete_frontmatter(self):
        """Test handling incomplete frontmatter markers."""
        content = """---
name: test
No closing marker
"""
        frontmatter, _ = extract_frontmatter(content)
        assert frontmatter == {}
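The frontmatter tests above imply a parser that splits on `---` markers and coerces booleans and quoted strings. A minimal sketch of such a parser, handling only flat key/value pairs (not the nested `conditions` list), under the assumption that unterminated frontmatter yields an empty dict; the real `hookify.core.config_loader.extract_frontmatter` may differ:

```python
import re

def extract_frontmatter_sketch(content: str):
    """Split '---'-delimited frontmatter from the message body (flat keys only)."""
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", content, re.DOTALL)
    if not m:
        # No frontmatter, or no closing marker: everything is the message.
        return {}, content
    fm = {}
    for line in m.group(1).splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip().strip("'\"")          # drop surrounding quotes
        fm[key.strip()] = {"true": True, "false": False}.get(value, value)
    return fm, m.group(2).strip()

fm, msg = extract_frontmatter_sketch(
    "---\nname: test-rule\nenabled: true\n---\n\nRule message here.\n"
)
print(fm["name"], fm["enabled"], msg)  # test-rule True Rule message here.
```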
class TestLoadRuleFile:
    """Tests for loading individual rule files."""

    def test_load_valid_rule(self, create_rule_file):
        """Test loading a valid rule file."""
        content = """---
name: valid-rule
enabled: true
event: bash
action: block
conditions:
  - field: command
    operator: contains
    pattern: danger
---

This is a dangerous command!
"""
        rule_file = create_rule_file("valid-rule", content)
        rule = load_rule_file(str(rule_file))

        assert rule is not None
        assert rule.name == "valid-rule"
        assert rule.enabled is True
        assert rule.event == "bash"
        assert rule.action == "block"
        assert len(rule.conditions) == 1
        assert rule.conditions[0].field == "command"
        assert "dangerous command" in rule.message

    def test_load_legacy_pattern_rule(self, create_rule_file):
        """Test loading rule with legacy pattern (converts to condition)."""
        content = """---
name: legacy-rule
enabled: true
event: bash
pattern: "rm -rf"
---

Old style rule.
"""
        rule_file = create_rule_file("legacy-rule", content)
        rule = load_rule_file(str(rule_file))

        assert rule is not None
        assert len(rule.conditions) == 1
        assert rule.conditions[0].field == "command"  # Inferred from bash event
        assert rule.conditions[0].operator == "regex_match"
        assert rule.conditions[0].pattern == "rm -rf"

    def test_load_file_event_legacy_pattern(self, create_rule_file):
        """Test legacy pattern with file event infers correct field."""
        content = """---
name: file-legacy
enabled: true
event: file
pattern: "TODO"
---

Found TODO.
"""
        rule_file = create_rule_file("file-legacy", content)
        rule = load_rule_file(str(rule_file))

        assert rule.conditions[0].field == "new_text"

    def test_load_missing_frontmatter(self, create_rule_file):
        """Test loading file without frontmatter returns None."""
        content = "No frontmatter here"
        rule_file = create_rule_file("no-frontmatter", content)
        rule = load_rule_file(str(rule_file))

        assert rule is None

    def test_load_nonexistent_file(self):
        """Test loading nonexistent file returns None."""
        rule = load_rule_file("/nonexistent/path/hookify.test.local.md")
        assert rule is None
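The legacy-pattern tests above imply a fallback where a bare `pattern:` key becomes a single `regex_match` condition whose field is inferred from the event type. A hypothetical sketch of that conversion, with field names taken from the assertions (the `"content"` default for other events is an assumption):

```python
# Field inferred from the event type when a legacy `pattern:` is given.
LEGACY_FIELD_BY_EVENT = {
    "bash": "command",   # bash rules match the command line
    "file": "new_text",  # file rules match the text being written
}

def legacy_pattern_to_condition(event: str, pattern: str) -> dict:
    """Convert a legacy top-level pattern into one condition dict."""
    return {
        "field": LEGACY_FIELD_BY_EVENT.get(event, "content"),  # assumed default
        "operator": "regex_match",
        "pattern": pattern,
    }

print(legacy_pattern_to_condition("bash", "rm -rf")["field"],
      legacy_pattern_to_condition("file", "TODO")["field"])  # command new_text
```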
class TestLoadRules:
    """Tests for loading multiple rules with filtering."""

    def test_load_multiple_rules(self, temp_project_dir, create_rule_file):
        """Test loading multiple rule files."""
        create_rule_file("rule1", """---
name: rule-one
enabled: true
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test1
---
Rule 1
""")
        create_rule_file("rule2", """---
name: rule-two
enabled: true
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test2
---
Rule 2
""")

        rules = load_rules()

        assert len(rules) == 2
        names = {r.name for r in rules}
        assert "rule-one" in names
        assert "rule-two" in names

    def test_filter_by_event(self, temp_project_dir, create_rule_file):
        """Test filtering rules by event type."""
        create_rule_file("bash-rule", """---
name: bash-rule
enabled: true
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test
---
Bash rule
""")
        create_rule_file("file-rule", """---
name: file-rule
enabled: true
event: file
conditions:
  - field: content
    operator: contains
    pattern: test
---
File rule
""")
        create_rule_file("all-rule", """---
name: all-rule
enabled: true
event: all
conditions:
  - field: content
    operator: contains
    pattern: test
---
All events rule
""")

        # Filter for bash events
        bash_rules = load_rules(event="bash")
        bash_names = {r.name for r in bash_rules}
        assert "bash-rule" in bash_names
        assert "all-rule" in bash_names  # 'all' matches any event
        assert "file-rule" not in bash_names

        # Filter for file events
        file_rules = load_rules(event="file")
        file_names = {r.name for r in file_rules}
        assert "file-rule" in file_names
        assert "all-rule" in file_names
        assert "bash-rule" not in file_names

    def test_filter_excludes_disabled(self, temp_project_dir, create_rule_file):
        """Test that disabled rules are excluded."""
        create_rule_file("enabled-rule", """---
name: enabled-rule
enabled: true
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test
---
Enabled
""")
        create_rule_file("disabled-rule", """---
name: disabled-rule
enabled: false
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test
---
Disabled
""")

        rules = load_rules()

        assert len(rules) == 1
        assert rules[0].name == "enabled-rule"

    def test_load_rules_handles_invalid_file(self, temp_project_dir, create_rule_file):
        """Test that invalid files are skipped without crashing."""
        # Valid rule
        create_rule_file("valid", """---
name: valid
enabled: true
event: bash
conditions:
  - field: command
    operator: contains
    pattern: test
---
Valid rule
""")
        # Invalid rule (no frontmatter)
        create_rule_file("invalid", "No frontmatter")

        rules = load_rules()

        # Should only load the valid rule
        assert len(rules) == 1
        assert rules[0].name == "valid"

    def test_load_with_no_rules(self, temp_project_dir):
        """Test loading when no rule files exist."""
        rules = load_rules()
        assert rules == []


class TestRuleFromDict:
    """Tests for Rule.from_dict construction."""

    def test_defaults(self):
        """Test default values for optional fields."""
        frontmatter = {
            "name": "test",
            "event": "bash",
        }
        rule = Rule.from_dict(frontmatter, "message")

        assert rule.name == "test"
        assert rule.enabled is True  # Default
        assert rule.action == "warn"  # Default
        assert rule.message == "message"

    def test_explicit_values(self):
        """Test explicit values override defaults."""
        frontmatter = {
            "name": "test",
            "enabled": False,
            "event": "file",
            "action": "block",
            "tool_matcher": "Write|Edit",
        }
        rule = Rule.from_dict(frontmatter, "message")

        assert rule.enabled is False
        assert rule.event == "file"
        assert rule.action == "block"
        assert rule.tool_matcher == "Write|Edit"


class TestConditionFromDict:
    """Tests for Condition.from_dict construction."""

    def test_all_fields(self):
        """Test creating condition with all fields."""
        data = {
            "field": "command",
            "operator": "regex_match",
            "pattern": r"rm\s+-rf"
        }
        condition = Condition.from_dict(data)

        assert condition.field == "command"
        assert condition.operator == "regex_match"
        assert condition.pattern == r"rm\s+-rf"

    def test_default_operator(self):
        """Test default operator is regex_match."""
        data = {
            "field": "command",
            "pattern": "test"
        }
        condition = Condition.from_dict(data)

        assert condition.operator == "regex_match"

    def test_missing_fields(self):
        """Test missing fields default to empty strings."""
        data = {}
        condition = Condition.from_dict(data)

        assert condition.field == ""
        assert condition.pattern == ""
@@ -1,28 +1,22 @@
 #!/usr/bin/env bash
 #
 # Comments on a GitHub issue with a list of potential duplicates.
-# Usage: ./comment-on-duplicates.sh --potential-duplicates 456 789 101
-#
-# The base issue number is read from the workflow event payload.
+# Usage: ./comment-on-duplicates.sh --base-issue 123 --potential-duplicates 456 789 101
 #
 
 set -euo pipefail
 
 REPO="anthropics/claude-code"
 
-# Read from event payload so the issue number is bound to the triggering event.
-# Falls back to workflow_dispatch inputs for manual runs.
-BASE_ISSUE=$(jq -r '.issue.number // .inputs.issue_number // empty' "${GITHUB_EVENT_PATH:?GITHUB_EVENT_PATH not set}")
-if ! [[ "$BASE_ISSUE" =~ ^[0-9]+$ ]]; then
-  echo "Error: no issue number in event payload" >&2
-  exit 1
-fi
-
+BASE_ISSUE=""
 DUPLICATES=()
 
 # Parse arguments
 while [[ $# -gt 0 ]]; do
   case $1 in
+    --base-issue)
+      BASE_ISSUE="$2"
+      shift 2
+      ;;
     --potential-duplicates)
       shift
       while [[ $# -gt 0 && ! "$1" =~ ^-- ]]; do
@@ -31,12 +25,23 @@ while [[ $# -gt 0 ]]; do
       done
       ;;
     *)
-      echo "Error: unknown argument (only --potential-duplicates is accepted)" >&2
+      echo "Unknown option: $1" >&2
       exit 1
       ;;
   esac
 done
 
+# Validate base issue
+if [[ -z "$BASE_ISSUE" ]]; then
+  echo "Error: --base-issue is required" >&2
+  exit 1
+fi
+
+if ! [[ "$BASE_ISSUE" =~ ^[0-9]+$ ]]; then
+  echo "Error: --base-issue must be a number, got: $BASE_ISSUE" >&2
+  exit 1
+fi
+
+# Validate duplicates
 if [[ ${#DUPLICATES[@]} -eq 0 ]]; then
   echo "Error: --potential-duplicates requires at least one issue number" >&2
@@ -1,84 +0,0 @@
-#!/usr/bin/env bash
-#
-# Edits labels on a GitHub issue.
-# Usage: ./edit-issue-labels.sh --add-label bug --add-label needs-triage --remove-label untriaged
-#
-# The issue number is read from the workflow event payload.
-#
-
-set -euo pipefail
-
-# Read from event payload so the issue number is bound to the triggering event.
-# Falls back to workflow_dispatch inputs for manual runs.
-ISSUE=$(jq -r '.issue.number // .inputs.issue_number // empty' "${GITHUB_EVENT_PATH:?GITHUB_EVENT_PATH not set}")
-if ! [[ "$ISSUE" =~ ^[0-9]+$ ]]; then
-  echo "Error: no issue number in event payload" >&2
-  exit 1
-fi
-
-ADD_LABELS=()
-REMOVE_LABELS=()
-
-# Parse arguments
-while [[ $# -gt 0 ]]; do
-  case $1 in
-    --add-label)
-      ADD_LABELS+=("$2")
-      shift 2
-      ;;
-    --remove-label)
-      REMOVE_LABELS+=("$2")
-      shift 2
-      ;;
-    *)
-      echo "Error: unknown argument (only --add-label and --remove-label are accepted)" >&2
-      exit 1
-      ;;
-  esac
-done
-
-if [[ ${#ADD_LABELS[@]} -eq 0 && ${#REMOVE_LABELS[@]} -eq 0 ]]; then
-  exit 1
-fi
-
-# Fetch valid labels from the repo
-VALID_LABELS=$(gh label list --limit 500 --json name --jq '.[].name')
-
-# Filter to only labels that exist in the repo
-FILTERED_ADD=()
-for label in "${ADD_LABELS[@]}"; do
-  if echo "$VALID_LABELS" | grep -qxF "$label"; then
-    FILTERED_ADD+=("$label")
-  fi
-done
-
-FILTERED_REMOVE=()
-for label in "${REMOVE_LABELS[@]}"; do
-  if echo "$VALID_LABELS" | grep -qxF "$label"; then
-    FILTERED_REMOVE+=("$label")
-  fi
-done
-
-if [[ ${#FILTERED_ADD[@]} -eq 0 && ${#FILTERED_REMOVE[@]} -eq 0 ]]; then
-  exit 0
-fi
-
-# Build gh command arguments
-GH_ARGS=("issue" "edit" "$ISSUE")
-
-for label in "${FILTERED_ADD[@]}"; do
-  GH_ARGS+=("--add-label" "$label")
-done
-
-for label in "${FILTERED_REMOVE[@]}"; do
-  GH_ARGS+=("--remove-label" "$label")
-done
-
-gh "${GH_ARGS[@]}"
-
-if [[ ${#FILTERED_ADD[@]} -gt 0 ]]; then
-  echo "Added: ${FILTERED_ADD[*]}"
-fi
-if [[ ${#FILTERED_REMOVE[@]} -gt 0 ]]; then
-  echo "Removed: ${FILTERED_REMOVE[*]}"
-fi
@@ -1,96 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# Wrapper around gh CLI that only allows specific subcommands and flags.
-# All commands are scoped to the current repository via GH_REPO or GITHUB_REPOSITORY.
-#
-# Usage:
-#   ./scripts/gh.sh issue view 123
-#   ./scripts/gh.sh issue view 123 --comments
-#   ./scripts/gh.sh issue list --state open --limit 20
-#   ./scripts/gh.sh search issues "search query" --limit 10
-#   ./scripts/gh.sh label list --limit 100
-
-export GH_HOST=github.com
-
-REPO="${GH_REPO:-${GITHUB_REPOSITORY:-}}"
-if [[ -z "$REPO" || "$REPO" == */*/* || "$REPO" != */* ]]; then
-  echo "Error: GH_REPO or GITHUB_REPOSITORY must be set to owner/repo format (e.g., GITHUB_REPOSITORY=anthropics/claude-code)" >&2
-  exit 1
-fi
-export GH_REPO="$REPO"
-
-ALLOWED_FLAGS=(--comments --state --limit --label)
-FLAGS_WITH_VALUES=(--state --limit --label)
-
-SUB1="${1:-}"
-SUB2="${2:-}"
-CMD="$SUB1 $SUB2"
-case "$CMD" in
-  "issue view"|"issue list"|"search issues"|"label list")
-    ;;
-  *)
-    echo "Error: only 'issue view', 'issue list', 'search issues', 'label list' are allowed (e.g., ./scripts/gh.sh issue view 123)" >&2
-    exit 1
-    ;;
-esac
-
-shift 2
-
-# Separate flags from positional arguments
-POSITIONAL=()
-FLAGS=()
-skip_next=false
-for arg in "$@"; do
-  if [[ "$skip_next" == true ]]; then
-    FLAGS+=("$arg")
-    skip_next=false
-  elif [[ "$arg" == -* ]]; then
-    flag="${arg%%=*}"
-    matched=false
-    for allowed in "${ALLOWED_FLAGS[@]}"; do
-      if [[ "$flag" == "$allowed" ]]; then
-        matched=true
-        break
-      fi
-    done
-    if [[ "$matched" == false ]]; then
-      echo "Error: only --comments, --state, --limit, --label flags are allowed (e.g., ./scripts/gh.sh issue list --state open --limit 20)" >&2
-      exit 1
-    fi
-    FLAGS+=("$arg")
-    # If flag expects a value and isn't using = syntax, skip next arg
-    if [[ "$arg" != *=* ]]; then
-      for vflag in "${FLAGS_WITH_VALUES[@]}"; do
-        if [[ "$flag" == "$vflag" ]]; then
-          skip_next=true
-          break
-        fi
-      done
-    fi
-  else
-    POSITIONAL+=("$arg")
-  fi
-done
-
-if [[ "$CMD" == "search issues" ]]; then
-  QUERY="${POSITIONAL[0]:-}"
-  QUERY_LOWER=$(echo "$QUERY" | tr '[:upper:]' '[:lower:]')
-  if [[ "$QUERY_LOWER" == *"repo:"* || "$QUERY_LOWER" == *"org:"* || "$QUERY_LOWER" == *"user:"* ]]; then
-    echo "Error: search query must not contain repo:, org:, or user: qualifiers (e.g., ./scripts/gh.sh search issues \"bug report\" --limit 10)" >&2
-    exit 1
-  fi
-  gh "$SUB1" "$SUB2" "$QUERY" --repo "$REPO" "${FLAGS[@]}"
-elif [[ "$CMD" == "issue view" ]]; then
-  if [[ ${#POSITIONAL[@]} -ne 1 ]] || ! [[ "${POSITIONAL[0]}" =~ ^[0-9]+$ ]]; then
-    echo "Error: issue view requires exactly one numeric issue number (e.g., ./scripts/gh.sh issue view 123)" >&2
-    exit 1
-  fi
-  gh "$SUB1" "$SUB2" "${POSITIONAL[0]}" "${FLAGS[@]}"
-else
-  if [[ ${#POSITIONAL[@]} -ne 0 ]]; then
-    echo "Error: issue list and label list do not accept positional arguments (e.g., ./scripts/gh.sh issue list --state open, ./scripts/gh.sh label list --limit 100)" >&2
-    exit 1
-  fi
-  gh "$SUB1" "$SUB2" "${FLAGS[@]}"
-fi
@@ -1,38 +0,0 @@
-// Single source of truth for issue lifecycle labels, timeouts, and messages.
-
-export const lifecycle = [
-  {
-    label: "invalid",
-    days: 3,
-    reason: "this doesn't appear to be about Claude Code",
-    nudge: "This doesn't appear to be about [Claude Code](https://github.com/anthropics/claude-code). For general Anthropic support, visit [support.anthropic.com](https://support.anthropic.com).",
-  },
-  {
-    label: "needs-repro",
-    days: 7,
-    reason: "we still need reproduction steps to investigate",
-    nudge: "We weren't able to reproduce this. Could you provide steps to trigger the issue — what you ran, what happened, and what you expected?",
-  },
-  {
-    label: "needs-info",
-    days: 7,
-    reason: "we still need a bit more information to move forward",
-    nudge: "We need more information to continue investigating. Can you make sure to include your Claude Code version (`claude --version`), OS, and any error messages or logs?",
-  },
-  {
-    label: "stale",
-    days: 14,
-    reason: "inactive for too long",
-    nudge: "This issue has been automatically marked as stale due to inactivity.",
-  },
-  {
-    label: "autoclose",
-    days: 14,
-    reason: "inactive for too long",
-    nudge: "This issue has been marked for automatic closure.",
-  },
-] as const;
-
-export type LifecycleLabel = (typeof lifecycle)[number]["label"];
-
-export const STALE_UPVOTE_THRESHOLD = 10;
@@ -1,53 +0,0 @@
-#!/usr/bin/env bun
-
-// Posts a comment when a lifecycle label is applied to an issue,
-// giving the author a heads-up and a chance to respond before auto-close.
-
-import { lifecycle } from "./issue-lifecycle.ts";
-
-const DRY_RUN = process.argv.includes("--dry-run");
-const token = process.env.GITHUB_TOKEN;
-const repo = process.env.GITHUB_REPOSITORY; // owner/repo
-const label = process.env.LABEL;
-const issueNumber = process.env.ISSUE_NUMBER;
-
-if (!DRY_RUN && !token) throw new Error("GITHUB_TOKEN required");
-if (!repo) throw new Error("GITHUB_REPOSITORY required");
-if (!label) throw new Error("LABEL required");
-if (!issueNumber) throw new Error("ISSUE_NUMBER required");
-
-const entry = lifecycle.find((l) => l.label === label);
-if (!entry) {
-  console.log(`No lifecycle entry for label "${label}", skipping`);
-  process.exit(0);
-}
-
-const body = `${entry.nudge} This issue will be closed automatically if there's no activity within ${entry.days} days.`;
-
-// --
-
-if (DRY_RUN) {
-  console.log(`Would comment on #${issueNumber} for label "${label}":\n\n${body}`);
-  process.exit(0);
-}
-
-const response = await fetch(
-  `https://api.github.com/repos/${repo}/issues/${issueNumber}/comments`,
-  {
-    method: "POST",
-    headers: {
-      Authorization: `Bearer ${token}`,
-      Accept: "application/vnd.github.v3+json",
-      "Content-Type": "application/json",
-      "User-Agent": "lifecycle-comment",
-    },
-    body: JSON.stringify({ body }),
-  }
-);
-
-if (!response.ok) {
-  const text = await response.text();
-  throw new Error(`GitHub API ${response.status}: ${text}`);
-}
-
-console.log(`Commented on #${issueNumber} for label "${label}"`);
@@ -1,15 +1,23 @@
 #!/usr/bin/env bun
 
-import { lifecycle, STALE_UPVOTE_THRESHOLD } from "./issue-lifecycle.ts";
-
 // --
 
 const NEW_ISSUE = "https://github.com/anthropics/claude-code/issues/new/choose";
 const DRY_RUN = process.argv.includes("--dry-run");
+const STALE_DAYS = 14;
+const STALE_UPVOTE_THRESHOLD = 10;
 
 const CLOSE_MESSAGE = (reason: string) =>
   `Closing for now — ${reason}. Please [open a new issue](${NEW_ISSUE}) if this is still relevant.`;
 
+const lifecycle = [
+  { label: "invalid", days: 3, reason: "this doesn't appear to be about Claude Code" },
+  { label: "needs-repro", days: 7, reason: "we still need reproduction steps to investigate" },
+  { label: "needs-info", days: 7, reason: "we still need a bit more information to move forward" },
+  { label: "stale", days: 14, reason: "inactive for too long" },
+  { label: "autoclose", days: 14, reason: "inactive for too long" },
+];
+
 // --
 
 async function githubRequest<T>(
@@ -43,13 +51,12 @@ async function githubRequest<T>(
 // --
 
 async function markStale(owner: string, repo: string) {
-  const staleDays = lifecycle.find((l) => l.label === "stale")!.days;
   const cutoff = new Date();
-  cutoff.setDate(cutoff.getDate() - staleDays);
+  cutoff.setDate(cutoff.getDate() - STALE_DAYS);
 
   let labeled = 0;
 
-  console.log(`\n=== marking stale (${staleDays}d inactive) ===`);
+  console.log(`\n=== marking stale (${STALE_DAYS}d inactive) ===`);
 
   for (let page = 1; page <= 10; page++) {
     const issues = await githubRequest<any[]>(
@@ -70,8 +77,11 @@ async function markStale(owner: string, repo: string) {
       );
      if (alreadyStale) continue;
 
+      const isEnhancement = issue.labels?.some(
+        (l: any) => l.name === "enhancement"
+      );
       const thumbsUp = issue.reactions?.["+1"] ?? 0;
-      if (thumbsUp >= STALE_UPVOTE_THRESHOLD) continue;
+      if (isEnhancement && thumbsUp >= STALE_UPVOTE_THRESHOLD) continue;
 
       const base = `/repos/${owner}/${repo}/issues/${issue.number}`;
 
@@ -105,11 +115,6 @@ async function closeExpired(owner: string, repo: string) {
 
   for (const issue of issues) {
     if (issue.pull_request) continue;
-    if (issue.locked) continue;
-
-    const thumbsUp = issue.reactions?.["+1"] ?? 0;
-    if (thumbsUp >= STALE_UPVOTE_THRESHOLD) continue;
-
     const base = `/repos/${owner}/${repo}/issues/${issue.number}`;
 
     const events = await githubRequest<any[]>(`${base}/events?per_page=100`);
@@ -121,22 +126,6 @@ async function closeExpired(owner: string, repo: string) {
 
     if (!labeledAt || labeledAt > cutoff) continue;
 
-    // Skip if a non-bot user commented after the label was applied.
-    // The triage workflow should remove lifecycle labels on human
-    // activity, but check here too as a safety net.
-    const comments = await githubRequest<any[]>(
-      `${base}/comments?since=${labeledAt.toISOString()}&per_page=100`
-    );
-    const hasHumanComment = comments.some(
-      (c) => c.user && c.user.type !== "Bot"
-    );
-    if (hasHumanComment) {
-      console.log(
-        `#${issue.number}: skipping (human activity after ${label} label)`
-      );
-      continue;
-    }
-
     if (DRY_RUN) {
       const age = Math.floor((Date.now() - labeledAt.getTime()) / 86400000);
       console.log(`#${issue.number}: would close (${label}, ${age}d old) — ${issue.title}`);
@@ -155,14 +144,20 @@ async function closeExpired(owner: string, repo: string) {
 
 // --
 
-const owner = process.env.GITHUB_REPOSITORY_OWNER;
-const repo = process.env.GITHUB_REPOSITORY_NAME;
-if (!owner || !repo)
-  throw new Error("GITHUB_REPOSITORY_OWNER and GITHUB_REPOSITORY_NAME required");
+async function main() {
+  const owner = process.env.GITHUB_REPOSITORY_OWNER;
+  const repo = process.env.GITHUB_REPOSITORY_NAME;
+  if (!owner || !repo)
+    throw new Error("GITHUB_REPOSITORY_OWNER and GITHUB_REPOSITORY_NAME required");
 
-if (DRY_RUN) console.log("DRY RUN — no changes will be made\n");
+  if (DRY_RUN) console.log("DRY RUN — no changes will be made\n");
 
-const labeled = await markStale(owner, repo);
-const closed = await closeExpired(owner, repo);
+  const labeled = await markStale(owner, repo);
+  const closed = await closeExpired(owner, repo);
 
-console.log(`\nDone: ${labeled} ${DRY_RUN ? "would be labeled" : "labeled"} stale, ${closed} ${DRY_RUN ? "would be closed" : "closed"}`);
+  console.log(`\nDone: ${labeled} ${DRY_RUN ? "would be labeled" : "labeled"} stale, ${closed} ${DRY_RUN ? "would be closed" : "closed"}`);
+}
+
+main().catch(console.error);
+
+export {};