Compare commits

...

3 Commits

Author SHA1 Message Date
Morgan Lunt
5e4a45001d code-modernization: harden writes a patch instead of editing legacy; make map/security guidance language-agnostic
- modernize-harden: never edits legacy/ anymore. Writes findings plus a
  reviewed unified diff to analysis/<system>/security_remediation.patch.
  A second security-auditor pass reviews each hunk (RESOLVES / PARTIAL /
  INTRODUCES-RISK) before presenting. The user reviews and applies the
  patch deliberately, then re-runs to verify. This makes every command
  consistent with the recommended deny Edit(legacy/**) workspace setting,
  so the README's exception note is gone.
- modernize-map: restructure the parse-target list around three stack-
  agnostic principles (dispatcher targets are variables; code-storage
  joins live in config; entry points live in deployment descriptors), with
  COBOL/Java/web/CLI examples on equal footing rather than COBOL-dominant.
  Same protections against false dead-code findings, less stack-specific.
- security-auditor agent: rephrase coverage items in stack-neutral terms
  (record layouts/temp datasets, resource ACLs, deployment scripts/job
  definitions, batch input records) so the checklist reads naturally for
  COBOL, Java EE, .NET, and web targets alike.
- README: drop the harden exception note; describe the patch workflow.
2026-05-11 16:46:03 -07:00
Morgan Lunt
22a1b25977 Harden code-modernization plugin from a real CardDemo dry run
Fixes found by running the discovery workflow against the AWS CardDemo
mainframe sample (~50 KLOC of COBOL/CICS/JCL/BMS/VSAM):

- modernize-assess: add scc -> cloc -> find/wc fallback chain with the
  COCOMO-II formula so Step 1 works when scc isn't installed; same for
  portfolio-mode cloc/lizard. Drop the reference to a specific
  agent-spawning tool name (just "in parallel"). Sharpen the structural-
  map subagent prompt: 5-12 domains, subgraph clustering, ~40-edge cap,
  repo-relative paths, dangling-reference check.
- modernize-map: expand the parse-target list with the things a
  literal-minded reader would miss on a real mainframe codebase — CICS
  CSD DEFINE TRANSACTION/FILE for entry points and online file I/O,
  EXEC CICS file ops, SELECT...ASSIGN TO joined with JCL DD,
  EXEC SQL table refs (not JCL DD), SEND/RECEIVE MAP, dynamic
  data-name XCTL resolution, COBOL fixed-format column slicing. Without
  these the dead-code list is wrong (most CICS programs look unreachable).
  Also write a machine-readable topology.json alongside the summary.
- modernize-extract-rules: add a Priority (P0/P1/P2) field with a
  heuristic, and an optional Suspected-defect field. modernize-brief
  reads P0 rules to build the behavior contract, but the Rule Card had
  no priority slot — the chain was broken.
- modernize-brief: read the new P0 tags; flag low-confidence P0 rules as
  SME blockers.
- modernize-reimagine: drop "for the demo" wording.
- security-auditor agent: add mainframe/COBOL coverage items (RACF,
  JCL/PROC creds, BMS field validation, DB2 dynamic SQL, copybook PII)
  and mark web-only items as such so it adapts to the target stack.
- README: add Optional Tooling section and a symlink example for the
  expected layout.
2026-05-11 16:28:27 -07:00
Morgan Lunt
718818146e Fix code-modernization plugin: align README with commands, fix pipeline gaps
- modernize-brief: read TOPOLOGY.html (what modernize-map actually
  produces) instead of nonexistent TOPOLOGY.md, and tell the user which
  command produces each missing input.
- README: rewrite the Commands section to match actual command behavior —
  correct output filenames, ordering (brief is the synthesis/approval gate
  after discovery, not the first step), agent attributions, and required
  args. Add a workspace-layout note and an explicit callout that
  modernize-harden edits legacy/, which conflicts with the recommended
  deny rule. Reconcile the Overview and Typical Workflow sequences.
- modernize-assess: generalize the production-runtime overlay step so it
  no longer assumes a specific MCP server/tool; mark it optional. Fix
  app/jcl/ -> legacy/$1/jcl/ for layout consistency.
- modernize-map: make TOPOLOGY.html self-contained (load Mermaid from a
  CDN) so it renders in any browser; drop assumptions about an external
  artifact renderer. Generalize the telemetry annotation note.
- business-rules-extractor agent: fix command cross-reference to the
  actual command name.
- plugin.json: include the brief step in the workflow description.
2026-05-11 16:17:59 -07:00
10 changed files with 217 additions and 107 deletions

View File

@@ -1,6 +1,6 @@
{
"name": "code-modernization",
"description": "Modernize legacy codebases (COBOL, legacy Java/C++, monolith web apps) with a structured assess → map → extract-rules → reimaginetransform → harden workflow and specialist review agents",
"description": "Modernize legacy codebases (COBOL, legacy Java/C++, monolith web apps) with a structured assess → map → extract-rules → brief → reimagine/transform → harden workflow and specialist review agents",
"author": {
"name": "Anthropic",
"email": "support@anthropic.com"

View File

@@ -7,43 +7,55 @@ A structured workflow and set of specialist agents for modernizing legacy codeba
Legacy modernization fails most often not because the target technology is wrong, but because teams skip steps: they transform code before understanding it, reimagine architecture before extracting business rules, or ship without a harness that would catch behavior drift. This plugin enforces a sequence:
```
assess → map → extract-rules → reimagine | transform → harden
assess → map → extract-rules → brief → reimagine | transform → harden
```
Each step has a dedicated slash command. Specialist agents (legacy analyst, business rules extractor, architecture critic, security auditor, test engineer) are invoked from within those commands — or directly — to keep the work honest.
The discovery commands (`assess`, `map`, `extract-rules`) build artifacts under `analysis/<system>/`. The `brief` command synthesizes them into an approval gate. The build commands (`reimagine`, `transform`) write new code under `modernized/`. The `harden` command audits the legacy system and produces a reviewable remediation patch. Each step has a dedicated slash command, and specialist agents (legacy analyst, business rules extractor, architecture critic, security auditor, test engineer) are invoked from within those commands — or directly — to keep the work honest.
## Expected layout
Commands take a `<system-dir>` argument and assume the system being modernized lives at `legacy/<system-dir>/`. Discovery artifacts go to `analysis/<system-dir>/`, transformed code to `modernized/<system-dir>/…`. If your codebase lives elsewhere, symlink it in:
```bash
mkdir -p legacy && ln -s /path/to/your/legacy/codebase legacy/billing
```
## Optional tooling
`/modernize-assess` works best with [`scc`](https://github.com/boyter/scc) (LOC + complexity + COCOMO) or [`cloc`](https://github.com/AlDanial/cloc), and falls back to `find`/`wc` if neither is installed. Portfolio mode also benefits from [`lizard`](https://github.com/terryyin/lizard) (cyclomatic complexity). The commands degrade gracefully without them, but the metrics will be coarser.
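To check which of these is actually on `PATH` before a run (the same fallback order the commands use):
```bash
# Which LOC tools will /modernize-assess find? (fallback: scc → cloc → find/wc)
for tool in scc cloc lizard; do
  command -v "$tool" >/dev/null && echo "$tool: found" || echo "$tool: missing"
done
```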
## Commands
The commands are designed to be run in order, but each produces a standalone artifact so you can stop, review, and resume.
### `/modernize-brief`
Capture the modernization brief: what's being modernized, why now, constraints (regulatory, data, runtime), non-goals, and success criteria. Produces `analysis/brief.md`. Run this first.
### `/modernize-assess <system-dir>` — or — `/modernize-assess --portfolio <parent-dir>`
Inventory the legacy codebase: languages, line counts, complexity, build system, integrations, technical debt, security posture, documentation gaps, and a COCOMO-derived effort estimate. Produces `analysis/<system>/ASSESSMENT.md` and `analysis/<system>/ARCHITECTURE.mmd`. Spawns `legacy-analyst` (×2) and `security-auditor` in parallel for deep reads. With `--portfolio`, sweeps every subdirectory of a parent directory and writes a sequencing heat-map to `analysis/portfolio.html`.
### `/modernize-assess`
Inventory the legacy codebase: languages, line counts, module boundaries, external integrations, build system, test coverage, known pain points. Produces `analysis/assessment.md`. Uses the `legacy-analyst` agent for deep reads on unfamiliar dialects.
### `/modernize-map <system-dir>`
Build a dependency and topology map of the **legacy** system: program/module call graph, data lineage (programs ↔ data stores), entry points, dead-end candidates, and one traced critical-path business flow. Writes a re-runnable extraction script and produces `analysis/<system>/topology.json` (machine-readable), `analysis/<system>/TOPOLOGY.html` (rendered Mermaid + architect observations), and standalone `call-graph.mmd`, `data-lineage.mmd`, and `critical-path.mmd`.
### `/modernize-map`
Map the legacy structure onto a target architecture: which legacy modules become which target services/packages, data-flow diagrams, migration sequencing. Produces `analysis/map.md`. Uses the `architecture-critic` agent to pressure-test the design.
### `/modernize-extract-rules <system-dir> [module-pattern]`
Mine the business rules embedded in the legacy code — calculations, validations, eligibility, state transitions, policies — into Given/When/Then "Rule Cards" with `file:line` citations and confidence ratings. Spawns three `business-rules-extractor` agents in parallel (calculations, validations, lifecycle). Produces `analysis/<system>/BUSINESS_RULES.md` and `analysis/<system>/DATA_OBJECTS.md`.
### `/modernize-extract-rules`
Extract business rules from the legacy code — the rules that are encoded in procedural logic, COBOL copybooks, stored procedures, or config files — into human-readable form with citations back to source. Produces `analysis/rules.md`. Uses the `business-rules-extractor` agent.
### `/modernize-brief <system-dir> [target-stack]`
Synthesize the discovery artifacts into a phased **Modernization Brief** — the single document a steering committee approves and engineering executes: target architecture, strangler-fig phase plan with entry/exit criteria, behavior contract, validation strategy, open questions, and an approval block. Reads `ASSESSMENT.md`, `TOPOLOGY.html`, and `BUSINESS_RULES.md` and **stops if any are missing** — run the discovery commands first. Produces `analysis/<system>/MODERNIZATION_BRIEF.md` and enters plan mode as a human-in-the-loop gate.
### `/modernize-reimagine`
Propose the target design: APIs, data model, runtime. Explicitly list what changes from legacy and what stays identical. Produces `analysis/design.md`. Uses the `architecture-critic` agent to challenge over-engineering.
### `/modernize-reimagine <system-dir> <target-vision>`
Greenfield rebuild from extracted intent rather than a structural port. Mines a spec (`analysis/<system>/AI_NATIVE_SPEC.md`), designs a target architecture and has it adversarially reviewed (`analysis/<system>/REIMAGINED_ARCHITECTURE.md`), then **scaffolds services with executable acceptance tests** under `modernized/<system>-reimagined/` and writes a `CLAUDE.md` knowledge handoff for the new system. Two human-in-the-loop checkpoints. Spawns `business-rules-extractor`, `legacy-analyst` (×2), `architecture-critic`, and general-purpose scaffolding agents.
### `/modernize-transform`
Do the actual code transformation — module by module. Writes to `modernized/`. Pairs each transformed module with a test suite that pins the pre-transform behavior.
### `/modernize-transform <system-dir> <module> <target-stack>`
Surgical, single-module strangler-fig rewrite. Plans first (HITL gate), then writes characterization tests via `test-engineer`, then an idiomatic target implementation under `modernized/<system>/<module>/`, proves equivalence by running the tests, and produces `TRANSFORMATION_NOTES.md` mapping legacy → modern with deliberate deviations called out. Reviewed by `architecture-critic`.
### `/modernize-harden`
Post-transform review pass: security audit, test coverage, error handling, observability. Uses `security-auditor` and `test-engineer` agents. Produces a findings report ranked Blocker / High / Medium / Nit.
### `/modernize-harden <system-dir>`
Security hardening pass on the **legacy** system: OWASP/CWE scan, dependency CVEs, secrets, injection. Spawns `security-auditor`. Produces `analysis/<system>/SECURITY_FINDINGS.md` ranked Critical / High / Medium / Low and a reviewed `analysis/<system>/security_remediation.patch` with minimal fixes for the Critical/High findings. The patch is reviewed by a second `security-auditor` pass before you see it. **Never edits `legacy/`** — you review and apply the patch yourself when ready, then re-run to verify. Useful as a pre-modernization step when the legacy system will keep running in production during the migration.
## Agents
- **`legacy-analyst`** — Reads legacy code (COBOL, legacy Java/C++, procedural PHP, classic ASP) and produces structured summaries. Good at spotting implicit dependencies, copybook inheritance, and "JOBOL" patterns (procedural code wearing a modern syntax).
- **`business-rules-extractor`** — Extracts business rules from procedural code with source citations. Each rule includes: what, where it's implemented, which conditions fire it, and any corner cases hidden in data.
- **`architecture-critic`** — Adversarial reviewer for target architectures and transformed code. Default stance is skeptical: asks "do we actually need this?" Flags microservices-for-the-resume, ceremonial error handling, abstractions with one implementation.
- **`security-auditor`** — Reviews transformed code for auth, input validation, secret handling, and dependency CVEs. Tuned for the kinds of issues that appear when translating security primitives across stacks (e.g., session handling from servlet to stateless JWT).
- **`test-engineer`** — Audits test suites for behavior-pinning vs. coverage-theater. Flags tests that exercise code paths without asserting outcomes.
- **`legacy-analyst`** — Reads legacy code (COBOL, legacy Java/C++, procedural PHP, classic ASP) and produces structured summaries. Good at spotting implicit dependencies, copybook inheritance, and "JOBOL" patterns (procedural code wearing a modern syntax). Used by `assess` and `reimagine`.
- **`business-rules-extractor`** — Extracts business rules from procedural code with source citations. Each rule includes: what, where it's implemented, which conditions fire it, and any corner cases hidden in data. Used by `extract-rules` and `reimagine`.
- **`architecture-critic`** — Adversarial reviewer for target architectures and transformed code. Default stance is skeptical: asks "do we actually need this?" Flags microservices-for-the-resume, ceremonial error handling, abstractions with one implementation. Used by `reimagine` and `transform`.
- **`security-auditor`** — Reviews code for auth, input validation, secret handling, and dependency CVEs. Tuned for the kinds of issues that appear when translating security primitives across stacks (e.g., session handling from servlet to stateless JWT). Used by `assess` and `harden`.
- **`test-engineer`** — Writes characterization, contract, and equivalence tests that pin legacy behavior so transformation can be proven correct. Flags tests that exercise code paths without asserting outcomes. Used by `transform`.
## Installation
@@ -75,31 +87,31 @@ This plugin ships commands and agents, but modernization projects benefit from a
}
```
Adjust `legacy/` and `modernized/` to match your actual layout. The key invariants: `Edit` under `legacy/` is denied, and writes are scoped to `analysis/` (for documents) and `modernized/` (for the new code).
Adjust `legacy/` and `modernized/` to match your actual layout. The key invariants: `Edit` under `legacy/` is denied, and writes are scoped to `analysis/` (for documents) and `modernized/` (for the new code). Every command in this plugin respects this — `/modernize-harden` writes a patch to `analysis/` rather than editing `legacy/` in place.
## Typical Workflow
```bash
# 1. Write the brief — what are we modernizing and why?
/modernize-brief
# 1. Inventory the legacy system (or sweep a portfolio of them)
/modernize-assess billing
# 2. Inventory the legacy code
/modernize-assess
# 2. Map call graph, data lineage, and the critical path
/modernize-map billing
# 3. Extract business rules before touching the code
/modernize-extract-rules
# 3. Extract business rules into testable Rule Cards
/modernize-extract-rules billing
# 4. Map legacy structure to target
/modernize-map
# 4. Synthesize the approved Modernization Brief (human-in-the-loop gate)
/modernize-brief billing java-spring
# 5. Propose the target design and review it
/modernize-reimagine
# 5a. Greenfield rebuild from the extracted spec…
/modernize-reimagine billing "event-driven services on Java 21 / Spring Boot"
# 6. Transform module by module
/modernize-transform
# 5b. …or transform module by module (strangler fig)
/modernize-transform billing interest-calc java-spring
# 7. Harden: security, tests, observability
/modernize-harden
# 6. Security-harden the legacy system that's still in production
/modernize-harden billing
```
## License

View File

@@ -42,5 +42,5 @@ of the technology, skip it.
## Output format
One "Rule Card" per rule (see the format in the modernize:extract-rules
One "Rule Card" per rule (see the format in the `/modernize-extract-rules`
command). Group by category. Lead with a summary table.

View File

@@ -11,20 +11,29 @@ engineer can fix.
## Coverage checklist
Work through systematically:
Adapt to the target stack — web items don't apply to a batch system,
terminal/screen items don't apply to a SPA. Work through what's relevant:
- **Injection** (SQL, NoSQL, OS command, LDAP, XPath, template) — trace every
user-controlled input to every sink
user-controlled input to every sink, including dynamic SQL and shell-outs
- **Authentication / session** — hardcoded creds, weak session handling,
missing auth checks on sensitive routes
- **Sensitive data exposure** — secrets in source, weak crypto, PII in logs
- **Access control** — IDOR, missing ownership checks, privilege escalation paths
- **XSS / CSRF** — unescaped output, missing tokens
- **Insecure deserialization** — pickle/yaml.load/ObjectInputStream on
untrusted data
missing auth checks on sensitive routes/transactions/jobs
- **Sensitive data exposure** — secrets in source, weak crypto, PII in logs,
cleartext sensitive data in record layouts, flat files, or temp datasets
- **Access control** — IDOR, missing ownership checks, privilege escalation;
missing/permissive resource ACLs (RACF profiles, IAM policies, file perms);
unguarded admin functions
- **XSS / CSRF** — unescaped output, missing tokens (web targets)
- **Insecure deserialization** — untrusted data into pickle/yaml.load/
`ObjectInputStream` or custom record parsers
- **Vulnerable dependencies** — run `npm audit` / `pip-audit` /
read manifests and flag versions with known CVEs
- **SSRF / path traversal / open redirect**
- **Security misconfiguration** — debug mode, verbose errors, default creds
- **SSRF / path traversal / open redirect** (web/network targets)
- **Input validation** — missing length/range/format checks at trust
boundaries (form/screen fields, API params, batch input records) before
persistence or downstream calls
- **Security misconfiguration** — debug mode, verbose errors, default creds,
hardcoded credentials in deployment scripts, job definitions, or config
## Tooling

View File

@@ -23,6 +23,10 @@ cloc --quiet --csv <parent>/<sys> # LOC by language
lizard -s cyclomatic_complexity <parent>/<sys> 2>/dev/null | tail -1
```
If `cloc`/`lizard` are not installed, fall back to `scc <parent>/<sys>`
(LOC + complexity) or `find` + `wc -l` grouped by extension, and estimate
complexity by counting decision keywords per file. Note which tool you used.
Capture: total SLOC, dominant language, file count, mean & max
cyclomatic complexity (CCN). For dependency freshness, locate the
manifest (`package.json`, `pom.xml`, `*.csproj`, `requirements*.txt`,
@@ -69,6 +73,17 @@ scc legacy/$1
Then run `scc --by-file -s complexity legacy/$1 | head -25` to identify the
highest-complexity files. Capture the COCOMO effort/cost estimate scc provides.
If `scc` is not installed, fall back in order:
1. `cloc legacy/$1` for the LOC table, then compute COCOMO-II effort
yourself: `PM = 2.94 × (KSLOC)^1.10` (nominal scale factors). Show the
inputs.
2. If `cloc` is also missing, use `find` + `wc -l` grouped by extension
for LOC, and rank file complexity by counting decision keywords
(`IF`/`EVALUATE`/`WHEN`/`PERFORM` for COBOL; `if`/`for`/`while`/`case`/
`catch` for C-family). Compute COCOMO from KSLOC as above.
Note in the assessment which tool was used so the figures are reproducible.
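As a sketch of the fallback chain above (the extension list and keyword set are illustrative, and a POSIX shell with `awk` is assumed):
```bash
# LOC grouped by extension when scc and cloc are both missing
for ext in cbl cpy jcl bms; do
  printf '%-4s %8d lines\n' "$ext" \
    "$(find legacy/$1 -type f -name "*.$ext" -exec cat {} + | wc -l)"
done

# Rough complexity ranking: decision keywords per COBOL file
find legacy/$1 -name '*.cbl' | while read -r f; do
  printf '%6d %s\n' "$(grep -ciE 'IF|EVALUATE|WHEN|PERFORM' "$f")" "$f"
done | sort -rn | head

# COCOMO-II nominal effort, PM = 2.94 * KSLOC^1.10 (show the inputs)
awk -v sloc="$(find legacy/$1 -name '*.cbl' -exec cat {} + | wc -l)" \
  'BEGIN { printf "KSLOC=%.1f  PM=%.1f person-months\n", sloc/1000, 2.94 * (sloc/1000)^1.10 }'
```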
## Step 2 — Technology fingerprint
Identify, with file evidence:
@@ -80,12 +95,15 @@ Identify, with file evidence:
## Step 3 — Parallel deep analysis
Spawn three subagents **concurrently** using the Task tool:
Spawn three subagents **in parallel**:
1. **legacy-analyst** — "Build a structural map of legacy/$1: what are the
5-10 major functional domains, which source files belong to each, and how
do they depend on each other? Return a markdown table + a Mermaid
`graph TD` of domain-level dependencies. Cite file paths."
5-12 major functional domains (group optional/feature-gated subsystems
under one umbrella), which source files belong to each, and how do they
depend on each other (control flow + shared data)? Return a markdown
table + a Mermaid `graph TD` of domain-level dependencies — use
`subgraph` to cluster and cap at ~40 edges. Cite repo-relative file
paths. Flag dangling references (defined but no source, or unused)."
2. **legacy-analyst** — "Identify technical debt in legacy/$1: dead code,
deprecated APIs, copy-paste duplication, god objects/programs, missing
@@ -99,20 +117,21 @@ Spawn three subagents **concurrently** using the Task tool:
Wait for all three. Synthesize their findings.
## Step 4 — Production runtime overlay (observability)
## Step 4 — Production runtime overlay (optional)
If the system has batch jobs (e.g. JCL members under `app/jcl/`), call the
`observability` MCP tool `get_batch_runtimes` for each business-relevant
job name (interest, posting, statement, reporting). Use the returned
p50/p95/p99 and 90-day series to:
If production telemetry is available — an observability/APM MCP server, batch
job logs, or runtime exports the user can supply — gather p50/p95/p99
wall-clock for the system's key jobs/transactions (e.g. JCL members under
`legacy/$1/jcl/`, scheduled batches, top API routes). Use it to:
- Tag each functional domain from Step 3 with its production wall-clock
cost and **p99 variance** (p99/p50 ratio).
- Flag the highest-variance domain as the highest operational risk —
this is telemetry-grounded, not a static-analysis opinion.
Include a small **Batch Runtime** table (Job · Domain · p50 · p95 · p99 ·
p99/p50) in the assessment.
Include a small **Runtime Profile** table (Job/Route · Domain · p50 · p95 ·
p99 · p99/p50) in the assessment. If no telemetry is available, skip this
step and note the gap in the assessment.
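As a sketch of the variance math, assuming the telemetry has been exported to a hypothetical `runtimes.csv` with `job,domain,p50,p95,p99` columns (seconds):
```bash
# Rank jobs by p99/p50; the top row is the highest-operational-risk domain
awk -F, 'NR > 1 { printf "%-10s %-12s p99/p50=%.1f\n", $1, $2, $5/$3 }' \
  runtimes.csv | sort -t= -k2 -rn
```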
## Step 5 — Documentation gap analysis
@@ -126,7 +145,7 @@ Create `analysis/$1/ASSESSMENT.md` with these sections:
- **Executive Summary** (3-4 sentences: what it is, how big, how risky, headline recommendation)
- **System Inventory** (the scc table + tech fingerprint)
- **Architecture-at-a-Glance** (the domain table; reference the diagram)
- **Production Runtime Profile** (the batch-runtime table from Step 4, with the highest-variance domain called out)
- **Production Runtime Profile** (the runtime table from Step 4 with the highest-variance domain called out — or "no telemetry available")
- **Technical Debt** (top 10, ranked)
- **Security Findings** (CWE table)
- **Documentation Gaps** (top 5)

View File

@@ -8,8 +8,10 @@ single document a steering committee approves and engineering executes.
Target stack: `$2` (if blank, recommend one based on the assessment findings).
Read `analysis/$1/ASSESSMENT.md`, `TOPOLOGY.md`, and `BUSINESS_RULES.md` first.
If any are missing, say so and stop.
Read `analysis/$1/ASSESSMENT.md`, `analysis/$1/TOPOLOGY.html` (and the `.mmd`
files alongside it), and `analysis/$1/BUSINESS_RULES.md` first. If any are
missing, say so and stop — they come from `/modernize-assess`, `/modernize-map`,
and `/modernize-extract-rules` respectively. Run those first.
## The Brief
@@ -35,8 +37,11 @@ fewest-dependencies first. For each phase:
Render the phases as a Mermaid `gantt` chart.
### 4. Behavior Contract
List the **P0 behaviors** from BUSINESS_RULES.md that MUST be proven
equivalent before any phase ships. These become the regression suite.
List the **P0 rules** from BUSINESS_RULES.md (the ones tagged `Priority: P0` —
money, regulatory, data integrity) that MUST be proven equivalent before any
phase ships. These become the regression suite. Flag any P0 rule with
Confidence < High as a blocker requiring SME confirmation before its phase
starts.
### 5. Validation Strategy
State which combination applies: characterization tests, contract tests,

View File

@@ -38,6 +38,7 @@ Merge the three result sets. Deduplicate. For each distinct rule, write a
```
### RULE-NNN: <plain-English name>
**Category:** Calculation | Validation | Lifecycle | Policy
**Priority:** P0 | P1 | P2
**Source:** `path/to/file.ext:line-line`
**Plain English:** One sentence a business analyst would recognize.
**Specification:**
@@ -47,11 +48,18 @@ Merge the three result sets. Deduplicate. For each distinct rule, write a
[And <additional outcome>]
**Parameters:** <constants, rates, thresholds with their current values>
**Edge cases handled:** <list>
**Confidence:** High | Medium | Low — <why>
**Suspected defect:** <optional — legacy behavior that looks wrong; decide preserve-vs-fix during transform>
**Confidence:** High | Medium | Low — <why; if < High, state the exact SME question>
```
Priority heuristic — default to **P1**. Assign **P0** if the rule moves money,
enforces a regulatory/compliance requirement, or guards data integrity (and
flag P0 rules at <High confidence as SME-required). Assign **P2** for
display/formatting/convenience rules. The downstream `/modernize-brief`
behavior contract is built from the P0 rules, so assign deliberately.
Write all rule cards to `analysis/$1/BUSINESS_RULES.md` with:
- A summary table at top (ID, name, category, source, confidence)
- A summary table at top (ID, name, category, priority, source, confidence)
- Rule cards grouped by category
- A final **"Rules requiring SME confirmation"** section listing every
Medium/Low confidence rule with the specific question a human needs to answer

View File

@@ -1,23 +1,26 @@
---
description: Security vulnerability scan + remediation — OWASP, CVE, secrets, injection
description: Security vulnerability scan with a reviewable remediation patch — OWASP, CWE, CVE, secrets, injection
argument-hint: <system-dir>
---
Run a **security hardening pass** on `legacy/$1`: find vulnerabilities, rank
them, and fix the critical ones.
them, and produce a reviewable patch for the critical ones.
This command never edits `legacy/` — it writes findings and a proposed patch
to `analysis/$1/`. The user reviews and applies (or not).
## Scan
Spawn the **security-auditor** subagent:
"Adversarially audit legacy/$1 for security vulnerabilities. Cover:
OWASP Top 10 (injection, broken auth, XSS, SSRF, etc.), hardcoded secrets,
vulnerable dependency versions (check package manifests against known CVEs),
missing input validation, insecure deserialization, path traversal.
For each finding return: CWE ID, severity (Critical/High/Med/Low), file:line,
one-sentence exploit scenario, and recommended fix. Also run any available
SAST tooling (npm audit, pip-audit, OWASP dependency-check) and include
its raw output."
"Adversarially audit legacy/$1 for security vulnerabilities. Cover what's
relevant to the stack: injection (SQL/NoSQL/OS command/template), broken
auth, sensitive data exposure, access control gaps, insecure deserialization,
hardcoded secrets, vulnerable dependency versions, missing input validation,
path traversal. For each finding return: CWE ID, severity
(Critical/High/Med/Low), file:line, one-sentence exploit scenario, and
recommended fix. Run any available SAST tooling (npm audit, pip-audit,
OWASP dependency-check) and include its raw output."
## Triage
@@ -28,19 +31,34 @@ Write `analysis/$1/SECURITY_FINDINGS.md`:
## Remediate
For each **Critical** and **High** finding, fix it directly in the source.
Make minimal, targeted changes. After each fix, add a one-line entry under
"Remediation Log" in SECURITY_FINDINGS.md: finding ID → commit-style summary
of what changed.
For each **Critical** and **High** finding, draft a minimal, targeted fix.
Do **not** edit `legacy/` — write all fixes as a single unified diff to
`analysis/$1/security_remediation.patch`, with a comment line above each
hunk citing the finding ID it addresses (`# SEC-001: parameterize the query`).
Show the cumulative diff:
```bash
git -C legacy/$1 diff
```
Add a **Remediation Log** section to SECURITY_FINDINGS.md mapping each
finding ID → one-line summary of the proposed fix and the patch hunk that
implements it.
## Verify
Re-run the security-auditor against the patched code to confirm the
Critical/High findings are resolved. Update the scorecard with before/after.
Spawn the **security-auditor** again to **review the patch** against the
original code:
"Review analysis/$1/security_remediation.patch against legacy/$1. For each
hunk: does it fully remediate the cited finding? Does it introduce new
vulnerabilities or change behavior beyond the fix? Return one verdict per
hunk: RESOLVES / PARTIAL / INTRODUCES-RISK, with a one-line reason."
Add a **Patch Review** section to SECURITY_FINDINGS.md with the verdicts.
If any hunk is PARTIAL or INTRODUCES-RISK, revise the patch and re-review.
## Present
Tell the user the artifacts are ready:
- `analysis/$1/SECURITY_FINDINGS.md` — findings, remediation log, patch review
- `analysis/$1/security_remediation.patch` — review, then apply if appropriate
with `git -C legacy/$1 apply ../../analysis/$1/security_remediation.patch`
- Re-run `/modernize-harden $1` after applying to confirm resolution
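A sketch of the apply step (same paths as above); `--check` dry-runs the patch and will also surface any annotation lines `git apply` refuses:
```bash
# Validate first: exits non-zero without touching legacy/ if the patch won't apply
git -C legacy/$1 apply --check ../../analysis/$1/security_remediation.patch

# Apply once every Patch Review verdict is RESOLVES
git -C legacy/$1 apply ../../analysis/$1/security_remediation.patch
```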
Suggest: `glow -p analysis/$1/SECURITY_FINDINGS.md`

View File

@@ -11,31 +11,69 @@ connect? This is the map an engineer needs before touching anything.
## What to produce
Write a one-off analysis script (Python or shell — your choice) that parses
the source under `legacy/$1` and extracts:
the source under `legacy/$1` and extracts the four datasets below. Three
principles apply across stacks; getting them wrong produces a misleading map:
- **Program/module call graph** — who calls whom (for COBOL: `CALL` statements
and CICS `LINK`/`XCTL`; for Java: class-level imports/invocations; for Node:
`require`/`import`)
- **Data dependency graph** — which programs read/write which data stores
(COBOL: copybooks + VSAM/DB2 in JCL DD statements; Java: JPA entities/tables;
Node: model files)
- **Entry points** — batch jobs, transaction IDs, HTTP routes, CLI commands
- **Dead-end candidates** — modules with no inbound edges (potential dead code)
1. **Edges live in two places** — direct calls in source, *and* dispatcher/
router calls whose targets are variables (config tables, route maps,
dependency injection, dynamic dispatch). Resolve variables against config
before declaring an edge unresolvable.
2. **The code↔storage join is usually external configuration**, not source —
job/deployment descriptors map logical names to physical stores.
3. **Entry points usually live in deployment config**, not source — without
parsing it, every top-level module looks unreachable.
Extract:
- **Program/module call graph** — direct calls (`CALL`, method invocations,
`import`/`require`) *and* dispatcher calls (`EXEC CICS LINK/XCTL`, DI
container wiring, framework routing, reflection/factory). Resolve variable
call targets against route tables, copybooks, config, or constant pools.
- **Data dependency graph** — which modules read/write which data stores,
joined through the relevant config: `SELECT…ASSIGN TO` ↔ JCL `DD` (batch
COBOL), `EXEC CICS READ/WRITE…FILE()` ↔ CSD `DEFINE FILE` (CICS online),
`EXEC SQL` table refs (embedded SQL), ORM annotations/mappings (Java/.NET),
model files (Node/Python/Ruby). Include UI/screen bindings (BMS maps, JSPs,
templates) — they're dependencies too.
- **Entry points** — whatever the stack's outermost invoker is, read from
where it's defined: JCL `EXEC PGM=` and CICS CSD `DEFINE TRANSACTION`
(mainframe), `web.xml`/route annotations/route files (web), `main()`/argv
parsing (CLI), queue/scheduler subscriptions (event-driven).
- **Dead-end candidates** — modules with no inbound edges. **Only meaningful
once all the entry-point and call-edge types above are in the graph.**
Suppress the dead claim for anything that could be the target of an
unresolved dynamic call. A grep-only graph will mark most dispatcher-driven
modules (CICS programs, Spring controllers, ORM-bound DAOs) dead when they
aren't.
If the source is fixed-column (COBOL columns 8-72, RPG, etc.), slice the
code area and strip comment lines before regex matching, or you'll match
sequence numbers and commented-out code.
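A minimal sketch of both moves, the fixed-format slice and the `ASSIGN TO` ↔ `DD` join (paths and patterns are illustrative; a real extractor feeds `topology.json` instead of stdout):
```bash
# Strip COBOL comment lines (col 7 = '*') and slice the code area (cols 8-72)
# before matching, so sequence numbers and comments can never hit the regex
for f in legacy/$1/cobol/*.cbl; do
  sed -n '/^.\{6\}[^*]/p' "$f" | cut -c8-72 \
    | grep -oiE 'ASSIGN +TO +[A-Z0-9-]+' \
    | awk -v prog="$f" '{ print prog, "->", $NF }'
done

# ...then join those logical names against the physical datasets in JCL
grep -hoE '^//[A-Z0-9]+ +DD +DSN=[^,]+' legacy/$1/jcl/*.jcl
```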
Save the script as `analysis/$1/extract_topology.py` (or `.sh`) so it can be
re-run and audited. Run it. Show the raw output.
re-run and audited. Have it write a machine-readable
`analysis/$1/topology.json` and print a human summary. Run it; show the
summary (cap at ~200 lines for very large estates).
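To spot-check the machine-readable output (top-level keys here are illustrative, not a fixed schema):
```bash
# Quick sanity counts over the hypothetical topology.json structure
jq '{edges: (.edges | length), dead_ends: (.dead_end_candidates | length)}' \
  analysis/$1/topology.json
```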
## Render
From the extracted data, generate **three Mermaid diagrams** and write them
to `analysis/$1/TOPOLOGY.html` so the artifact pane renders them live.
to `analysis/$1/TOPOLOGY.html` as a self-contained page that renders in any
browser.
The HTML page must use: dark `#1e1e1e` background, `#d4d4d4` text,
`#cc785c` for `<h2>`/accents, `system-ui` font, all CSS **inline** (no
external stylesheets). Each diagram goes in a
`<pre class="mermaid">...</pre>` block — the artifact server loads
mermaid.js and renders client-side. Do **not** wrap diagrams in
markdown ` ``` ` fences inside the HTML.
external stylesheets). Load Mermaid from a CDN in `<head>`:
```html
<script type="module">
import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs';
mermaid.initialize({ startOnLoad: true, theme: 'dark' });
</script>
```
Each diagram goes in a `<pre class="mermaid">...</pre>` block. Do **not**
wrap diagrams in markdown ` ``` ` fences inside the HTML.
1. **`graph TD` — Module call graph.** Cluster by domain (use `subgraph`).
Highlight entry points in a distinct style. Cap at ~40 nodes — if larger,
@@ -46,9 +84,9 @@ markdown ` ``` ` fences inside the HTML.
3. **`flowchart TD` — Critical path.** Trace ONE end-to-end business flow
(e.g., "monthly billing run" or "process payment") through every program
and data store it touches, in execution order. If the `observability`
MCP server is connected, annotate each batch step with its p50/p99
wall-clock from `get_batch_runtimes`.
and data store it touches, in execution order. If production telemetry is
available (see `/modernize-assess` Step 4), annotate each step with its
p50/p99 wall-clock.
Also export the three diagrams as standalone `.mmd` files for re-use:
`analysis/$1/call-graph.mmd`, `analysis/$1/data-lineage.mmd`,
@@ -63,4 +101,4 @@ touched by too many writers.
## Present
Tell the user to open `analysis/$1/TOPOLOGY.html` in the artifact pane.
Tell the user to open `analysis/$1/TOPOLOGY.html` in a browser.

View File

@@ -57,8 +57,9 @@ Enter plan mode. Present the architecture. Wait for approval.
## Phase E — Parallel scaffolding
For each service in the approved architecture (cap at 3 for the demo), spawn
a **general-purpose agent in parallel**:
For each service in the approved architecture (cap at 3 to keep the run
tractable; tell the user which you deferred), spawn a **general-purpose agent
in parallel**:
"Scaffold the <service-name> service per analysis/$1/REIMAGINED_ARCHITECTURE.md
and AI_NATIVE_SPEC.md. Create: project skeleton, domain model, API stubs