Compare commits

..

4 Commits

Author SHA1 Message Date
Morgan Lunt
5e4a45001d code-modernization: harden writes a patch instead of editing legacy; make map/security guidance language-agnostic
- modernize-harden: never edits legacy/ anymore. Writes findings plus a
  reviewed unified diff to analysis/<system>/security_remediation.patch.
  A second security-auditor pass reviews each hunk (RESOLVES / PARTIAL /
  INTRODUCES-RISK) before presenting. The user reviews and applies the
  patch deliberately, then re-runs to verify. This makes every command
  consistent with the recommended deny Edit(legacy/**) workspace setting,
  so the README's exception note is gone.
- modernize-map: restructure the parse-target list around three stack-
  agnostic principles (dispatcher targets are variables; code-storage
  joins live in config; entry points live in deployment descriptors), with
  COBOL/Java/web/CLI examples on equal footing rather than COBOL-dominant.
  Same protections against false dead-code findings, less stack-specific.
- security-auditor agent: rephrase coverage items in stack-neutral terms
  (record layouts/temp datasets, resource ACLs, deployment scripts/job
  definitions, batch input records) so the checklist reads naturally for
  COBOL, Java EE, .NET, and web targets alike.
- README: drop the harden exception note; describe the patch workflow.
2026-05-11 16:46:03 -07:00
Morgan Lunt
22a1b25977 Harden code-modernization plugin from a real CardDemo dry run
Fixes found by running the discovery workflow against the AWS CardDemo
mainframe sample (~50 KLOC of COBOL/CICS/JCL/BMS/VSAM):

- modernize-assess: add scc -> cloc -> find/wc fallback chain with the
  COCOMO-II formula so Step 1 works when scc isn't installed; same for
  portfolio-mode cloc/lizard. Drop the reference to a specific
  agent-spawning tool name (just "in parallel"). Sharpen the structural-
  map subagent prompt: 5-12 domains, subgraph clustering, ~40-edge cap,
  repo-relative paths, dangling-reference check.
- modernize-map: expand the parse-target list with the things a
  literal-minded reader would miss on a real mainframe codebase — CICS
  CSD DEFINE TRANSACTION/FILE for entry points and online file I/O,
  EXEC CICS file ops, SELECT...ASSIGN TO joined with JCL DD,
  EXEC SQL table refs (not JCL DD), SEND/RECEIVE MAP, dynamic
  data-name XCTL resolution, COBOL fixed-format column slicing. Without
  these the dead-code list is wrong (most CICS programs look unreachable).
  Also write a machine-readable topology.json alongside the summary.
- modernize-extract-rules: add a Priority (P0/P1/P2) field with a
  heuristic, and an optional Suspected-defect field. modernize-brief
  reads P0 rules to build the behavior contract, but the Rule Card had
  no priority slot — the chain was broken.
- modernize-brief: read the new P0 tags; flag low-confidence P0 rules as
  SME blockers.
- modernize-reimagine: drop "for the demo" wording.
- security-auditor agent: add mainframe/COBOL coverage items (RACF,
  JCL/PROC creds, BMS field validation, DB2 dynamic SQL, copybook PII)
  and mark web-only items as such so it adapts to the target stack.
- README: add Optional Tooling section and a symlink example for the
  expected layout.
2026-05-11 16:28:27 -07:00
Morgan Lunt
718818146e Fix code-modernization plugin: align README with commands, fix pipeline gaps
- modernize-brief: read TOPOLOGY.html (what modernize-map actually
  produces) instead of nonexistent TOPOLOGY.md, and tell the user which
  command produces each missing input.
- README: rewrite the Commands section to match actual command behavior —
  correct output filenames, ordering (brief is the synthesis/approval gate
  after discovery, not the first step), agent attributions, and required
  args. Add a workspace-layout note and an explicit callout that
  modernize-harden edits legacy/, which conflicts with the recommended
  deny rule. Reconcile the Overview and Typical Workflow sequences.
- modernize-assess: generalize the production-runtime overlay step so it
  no longer assumes a specific MCP server/tool; mark it optional. Fix
  app/jcl/ -> legacy/$1/jcl/ for layout consistency.
- modernize-map: make TOPOLOGY.html self-contained (load Mermaid from a
  CDN) so it renders in any browser; drop assumptions about an external
  artifact renderer. Generalize the telemetry annotation note.
- business-rules-extractor agent: fix command cross-reference to the
  actual command name.
- plugin.json: include the brief step in the workflow description.
2026-05-11 16:17:59 -07:00
Morgan Westlee Lunt
bdca23e8e4 Add code-modernization plugin
Structured workflow (assess → map → extract-rules → reimagine → transform →
harden) and specialist agents (legacy-analyst, business-rules-extractor,
architecture-critic, security-auditor, test-engineer) for modernizing legacy
codebases into current stacks.
2026-04-24 19:52:02 +00:00
16 changed files with 1184 additions and 16 deletions

View File

@@ -7,22 +7,6 @@
"email": "support@anthropic.com"
},
"plugins": [
{
"name": "42crunch-api-security-testing",
"description": "Automate API security directly in Claude Code with 42Crunch - automatically audit OpenAPI specs, detect vulnerabilities aligned with OWASP API Security risks (including BOLA/BFLA), and apply AI-powered fixes. Designed for AI-assisted development workflows, it provides continuous guardrails through an audit->scan->remediate->validate loop, ensuring APIs meet enterprise security standards before deployment.",
"author": {
"name": "42Crunch"
},
"category": "security",
"source": {
"source": "git-subdir",
"url": "https://github.com/42Crunch-AI/claude-plugins.git",
"path": "plugins/api-security-testing",
"ref": "v1.0.1",
"sha": "56273e0e20762d76640838300a7431c4260cad32"
},
"homepage": "https://42crunch.com"
},
{
"name": "adlc",
"description": "Agentforce Agent Development Life Cycle — author, discover, scaffold, deploy, test, and optimize .agent files",
@@ -451,6 +435,17 @@
      },
      "homepage": "https://github.com/cockroachdb/claude-plugin"
    },
    {
      "name": "code-modernization",
      "description": "Modernize legacy codebases (COBOL, legacy Java/C++, monolith web apps) with a structured assess / map / extract-rules / reimagine / transform / harden workflow and specialist review agents",
      "author": {
        "name": "Anthropic",
        "email": "support@anthropic.com"
      },
      "source": "./plugins/code-modernization",
      "category": "development",
      "homepage": "https://github.com/anthropics/claude-plugins-official/tree/main/plugins/code-modernization"
    },
    {
      "name": "code-review",
      "description": "Automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives",

View File

@@ -0,0 +1,8 @@
{
  "name": "code-modernization",
  "description": "Modernize legacy codebases (COBOL, legacy Java/C++, monolith web apps) with a structured assess → map → extract-rules → brief → reimagine/transform → harden workflow and specialist review agents",
  "author": {
    "name": "Anthropic",
    "email": "support@anthropic.com"
  }
}

View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,119 @@
# Code Modernization Plugin
A structured workflow and set of specialist agents for modernizing legacy codebases — COBOL, legacy Java/C++, monolith web apps — into current stacks while preserving behavior.
## Overview
Legacy modernization fails most often not because the target technology is wrong, but because teams skip steps: they transform code before understanding it, reimagine architecture before extracting business rules, or ship without a harness that would catch behavior drift. This plugin enforces a sequence:
```
assess → map → extract-rules → brief → reimagine | transform → harden
```
The discovery commands (`assess`, `map`, `extract-rules`) build artifacts under `analysis/<system>/`. The `brief` command synthesizes them into an approval gate. The build commands (`reimagine`, `transform`) write new code under `modernized/`. The `harden` command audits the legacy system and produces a reviewable remediation patch. Each step has a dedicated slash command, and specialist agents (legacy analyst, business rules extractor, architecture critic, security auditor, test engineer) are invoked from within those commands — or directly — to keep the work honest.
## Expected layout
Commands take a `<system-dir>` argument and assume the system being modernized lives at `legacy/<system-dir>/`. Discovery artifacts go to `analysis/<system-dir>/`, transformed code to `modernized/<system-dir>/…`. If your codebase lives elsewhere, symlink it in:
```bash
mkdir -p legacy && ln -s /path/to/your/legacy/codebase legacy/billing
```
## Optional tooling
`/modernize-assess` works best with [`scc`](https://github.com/boyter/scc) (LOC + complexity + COCOMO) or [`cloc`](https://github.com/AlDanial/cloc), and falls back to `find`/`wc` if neither is installed. Portfolio mode also benefits from [`lizard`](https://github.com/terryyin/lizard) (cyclomatic complexity). The commands degrade gracefully without them, but the metrics will be coarser.
## Commands
The commands are designed to be run in order, but each produces a standalone artifact so you can stop, review, and resume.
### `/modernize-assess <system-dir>` — or — `/modernize-assess --portfolio <parent-dir>`
Inventory the legacy codebase: languages, line counts, complexity, build system, integrations, technical debt, security posture, documentation gaps, and a COCOMO-derived effort estimate. Produces `analysis/<system>/ASSESSMENT.md` and `analysis/<system>/ARCHITECTURE.mmd`. Spawns `legacy-analyst` (×2) and `security-auditor` in parallel for deep reads. With `--portfolio`, sweeps every subdirectory of a parent directory and writes a sequencing heat-map to `analysis/portfolio.html`.
### `/modernize-map <system-dir>`
Build a dependency and topology map of the **legacy** system: program/module call graph, data lineage (programs ↔ data stores), entry points, dead-end candidates, and one traced critical-path business flow. Writes a re-runnable extraction script and produces `analysis/<system>/topology.json` (machine-readable), `analysis/<system>/TOPOLOGY.html` (rendered Mermaid + architect observations), and standalone `call-graph.mmd`, `data-lineage.mmd`, and `critical-path.mmd`.
### `/modernize-extract-rules <system-dir> [module-pattern]`
Mine the business rules embedded in the legacy code — calculations, validations, eligibility, state transitions, policies — into Given/When/Then "Rule Cards" with `file:line` citations and confidence ratings. Spawns three `business-rules-extractor` agents in parallel (calculations, validations, lifecycle). Produces `analysis/<system>/BUSINESS_RULES.md` and `analysis/<system>/DATA_OBJECTS.md`.
### `/modernize-brief <system-dir> [target-stack]`
Synthesize the discovery artifacts into a phased **Modernization Brief** — the single document a steering committee approves and engineering executes: target architecture, strangler-fig phase plan with entry/exit criteria, behavior contract, validation strategy, open questions, and an approval block. Reads `ASSESSMENT.md`, `TOPOLOGY.html`, and `BUSINESS_RULES.md` and **stops if any are missing** — run the discovery commands first. Produces `analysis/<system>/MODERNIZATION_BRIEF.md` and enters plan mode as a human-in-the-loop gate.
### `/modernize-reimagine <system-dir> <target-vision>`
Greenfield rebuild from extracted intent rather than a structural port. Mines a spec (`analysis/<system>/AI_NATIVE_SPEC.md`), designs a target architecture and has it adversarially reviewed (`analysis/<system>/REIMAGINED_ARCHITECTURE.md`), then **scaffolds services with executable acceptance tests** under `modernized/<system>-reimagined/` and writes a `CLAUDE.md` knowledge handoff for the new system. Two human-in-the-loop checkpoints. Spawns `business-rules-extractor`, `legacy-analyst` (×2), `architecture-critic`, and general-purpose scaffolding agents.
### `/modernize-transform <system-dir> <module> <target-stack>`
Surgical, single-module strangler-fig rewrite. Plans first (HITL gate), then writes characterization tests via `test-engineer`, then an idiomatic target implementation under `modernized/<system>/<module>/`, proves equivalence by running the tests, and produces `TRANSFORMATION_NOTES.md` mapping legacy → modern with deliberate deviations called out. Reviewed by `architecture-critic`.
### `/modernize-harden <system-dir>`
Security hardening pass on the **legacy** system: OWASP/CWE scan, dependency CVEs, secrets, injection. Spawns `security-auditor`. Produces `analysis/<system>/SECURITY_FINDINGS.md` ranked Critical / High / Medium / Low and a reviewed `analysis/<system>/security_remediation.patch` with minimal fixes for the Critical/High findings. The patch is reviewed by a second `security-auditor` pass before you see it. **Never edits `legacy/`** — you review and apply the patch yourself when ready, then re-run to verify. Useful as a pre-modernization step when the legacy system will keep running in production during the migration.
## Agents
- **`legacy-analyst`** — Reads legacy code (COBOL, legacy Java/C++, procedural PHP, classic ASP) and produces structured summaries. Good at spotting implicit dependencies, copybook inheritance, and "JOBOL" patterns (procedural code wearing a modern syntax). Used by `assess` and `reimagine`.
- **`business-rules-extractor`** — Extracts business rules from procedural code with source citations. Each rule includes: what, where it's implemented, which conditions fire it, and any corner cases hidden in data. Used by `extract-rules` and `reimagine`.
- **`architecture-critic`** — Adversarial reviewer for target architectures and transformed code. Default stance is skeptical: asks "do we actually need this?" Flags microservices-for-the-resume, ceremonial error handling, abstractions with one implementation. Used by `reimagine` and `transform`.
- **`security-auditor`** — Reviews code for auth, input validation, secret handling, and dependency CVEs. Tuned for the kinds of issues that appear when translating security primitives across stacks (e.g., session handling from servlet to stateless JWT). Used by `assess` and `harden`.
- **`test-engineer`** — Writes characterization, contract, and equivalence tests that pin legacy behavior so transformation can be proven correct. Flags tests that exercise code paths without asserting outcomes. Used by `transform`.
## Installation
```
/plugin install code-modernization@claude-plugins-official
```
## Recommended Workspace Setup
This plugin ships commands and agents, but modernization projects benefit from a workspace permission layout that enforces the "never touch legacy, freely edit modernized" rule. A starting-point `.claude/settings.json` for the project directory you're modernizing:
```json
{
  "permissions": {
    "allow": [
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(git status:*)",
      "Read(**)",
      "Write(analysis/**)",
      "Write(modernized/**)",
      "Edit(analysis/**)",
      "Edit(modernized/**)"
    ],
    "deny": [
      "Edit(legacy/**)"
    ]
  }
}
```
Adjust `legacy/` and `modernized/` to match your actual layout. The key invariants: `Edit` under `legacy/` is denied, and writes are scoped to `analysis/` (for documents) and `modernized/` (for the new code). Every command in this plugin respects this — `/modernize-harden` writes a patch to `analysis/` rather than editing `legacy/` in place.
## Typical Workflow
```bash
# 1. Inventory the legacy system (or sweep a portfolio of them)
/modernize-assess billing
# 2. Map call graph, data lineage, and the critical path
/modernize-map billing
# 3. Extract business rules into testable Rule Cards
/modernize-extract-rules billing
# 4. Synthesize the approved Modernization Brief (human-in-the-loop gate)
/modernize-brief billing java-spring
# 5a. Greenfield rebuild from the extracted spec…
/modernize-reimagine billing "event-driven services on Java 21 / Spring Boot"
# 5b. …or transform module by module (strangler fig)
/modernize-transform billing interest-calc java-spring
# 6. Security-harden the legacy system that's still in production
/modernize-harden billing
```
## License
Apache 2.0. See `LICENSE`.

View File

@@ -0,0 +1,36 @@
---
name: architecture-critic
description: Reviews proposed target architectures and transformed code against modern best practice. Adversarial — looks for over-engineering, missed requirements, and simpler alternatives.
tools: Read, Glob, Grep, Bash
---
You are a principal engineer reviewing a modernization design or a freshly
transformed module. Your default stance is **skeptical**. The team is excited
about the new shiny; your job is to ask "do we actually need this?"
## Review lens
For **architecture proposals**:
- Does every service boundary correspond to a real domain seam, or is this
microservices-for-the-resume?
- What's the simplest design that meets the stated requirements? How does
the proposal compare?
- Which non-functional requirements (latency, throughput, consistency) are
unstated, and does the design accidentally violate them?
- What's the data migration story? "We'll figure it out" is a finding.
- What happens when service X is down? Trace one failure mode end-to-end.
For **transformed code**:
- Is this idiomatic for the target stack, or is legacy structure leaking
through? (Flag "JOBOL" — procedural Java with COBOL variable names.)
- Is error handling meaningful or ceremonial?
- Are there abstractions with exactly one implementation and no second use
case in sight?
- Does the test suite actually pin behavior, or just exercise code paths?
- What would the on-call engineer need at 3am that isn't here?
## Output
Findings ranked **Blocker / High / Medium / Nit**. Each with: what, where,
why it matters, and a concrete suggested change. End with one paragraph:
"If I could only change one thing, it would be ___."

View File

@@ -0,0 +1,46 @@
---
name: business-rules-extractor
description: Mines domain logic, calculations, validations, and policies from legacy code into testable Given/When/Then specifications. Use when you need to separate "what the business requires" from "how the old code happened to implement it."
tools: Read, Glob, Grep, Bash
---
You are a business analyst who reads code. Your job is to find the **rules**
hidden inside legacy systems — the calculations, thresholds, eligibility
checks, and policies that define how the business actually operates — and
express them in a form that survives the rewrite.
## What counts as a business rule
- **Calculations**: interest, fees, taxes, discounts, scores, aggregates
- **Validations**: required fields, format checks, range limits, cross-field
- **Eligibility / authorization**: who can do what, when, under which conditions
- **State transitions**: status lifecycles, what triggers each transition
- **Policies**: retention periods, retry limits, cutoff times, rounding rules
## What does NOT count
Infrastructure, logging, error handling, UI layout, technical retries,
connection pooling. If a rule would be the same regardless of what language
the system was written in, it's a business rule. If it only exists because
of the technology, skip it.
## Extraction discipline
1. Find the rule in code. Record exact `file:line-line`.
2. State it in plain English a non-engineer would recognize.
3. Encode it as Given/When/Then with **concrete values**:
```
Given an account with balance $1,250.00 and APR 18.5%
When the monthly interest batch runs
Then the interest charged is $19.27 (balance × APR ÷ 12, rounded half-up to cents)
```
4. List the parameters (rates, limits, magic numbers) with their current
hardcoded values — these often need to become configuration.
5. Rate your confidence: **High** (logic is explicit), **Medium** (inferred
from structure/names), **Low** (ambiguous; needs SME).
6. If confidence < High, write the exact question an SME must answer.
## Output format
One "Rule Card" per rule (see the format in the `/modernize-extract-rules`
command). Group by category. Lead with a summary table.

View File

@@ -0,0 +1,39 @@
---
name: legacy-analyst
description: Deep-reads legacy codebases (COBOL, Java, .NET, Node, anything) to build structural and behavioral understanding. Use for discovery, dependency mapping, dead-code detection, and "what does this system actually do" questions.
tools: Read, Glob, Grep, Bash
---
You are a senior legacy systems analyst with 20 years of experience reading
code nobody else wants to read — COBOL, JCL, RPG, classic ASP, EJB 2,
Struts 1, raw servlets, Perl CGI.
Your job is **understanding, not judgment**. The code in front of you kept a
business running for decades. Treat it with respect, figure out what it does,
and explain it in terms a modern engineer can act on.
## How you work
- **Read before you grep.** Open the entry points (main programs, JCL jobs,
controllers, routes) and trace the actual flow. Pattern-matching on names
lies; control flow doesn't.
- **Cite everything.** Every claim gets a `path/to/file:line` reference.
If you can't point to a line, you don't know it — say so.
- **Distinguish "is" from "appears to be."** When you're inferring intent
from structure, flag it: "appears to handle X (inferred from variable
names; no comments confirm)."
- **Use the right vocabulary for the stack.** COBOL has paragraphs,
copybooks, and FD entries. CICS has transactions and BMS maps. JCL has
steps and DD statements. Java has packages and beans. Use the native
terms so SMEs trust your output.
- **Find the data first.** In legacy systems, the data structures (copybooks,
DDL, schemas) are usually more stable and truthful than the procedural
code. Map the data, then map who touches it.
- **Note what's missing.** Unhandled error paths, TODO comments, commented-out
blocks, magic numbers — these are signals about history and risk.
## Output format
Default to structured markdown: tables for inventories, Mermaid for graphs,
bullet lists for findings. Always include a "Confidence & Gaps" footer
listing what you couldn't determine and what you'd ask an SME.

View File

@@ -0,0 +1,56 @@
---
name: security-auditor
description: Adversarial security reviewer — OWASP Top 10, CWE, dependency CVEs, secrets, injection. Use for security debt scanning and pre-modernization hardening.
tools: Read, Glob, Grep, Bash
---
You are an application security engineer performing an adversarial review.
Assume the code is hostile until proven otherwise. Your job is to find
vulnerabilities a real attacker would find — and explain them in terms an
engineer can fix.
## Coverage checklist
Adapt to the target stack — web items don't apply to a batch system,
terminal/screen items don't apply to a SPA. Work through what's relevant:
- **Injection** (SQL, NoSQL, OS command, LDAP, XPath, template) — trace every
user-controlled input to every sink, including dynamic SQL and shell-outs
- **Authentication / session** — hardcoded creds, weak session handling,
missing auth checks on sensitive routes/transactions/jobs
- **Sensitive data exposure** — secrets in source, weak crypto, PII in logs,
cleartext sensitive data in record layouts, flat files, or temp datasets
- **Access control** — IDOR, missing ownership checks, privilege escalation;
missing/permissive resource ACLs (RACF profiles, IAM policies, file perms);
unguarded admin functions
- **XSS / CSRF** — unescaped output, missing tokens (web targets)
- **Insecure deserialization** — untrusted data into pickle/yaml.load/
`ObjectInputStream` or custom record parsers
- **Vulnerable dependencies** — run `npm audit` / `pip-audit` /
read manifests and flag versions with known CVEs
- **SSRF / path traversal / open redirect** (web/network targets)
- **Input validation** — missing length/range/format checks at trust
boundaries (form/screen fields, API params, batch input records) before
persistence or downstream calls
- **Security misconfiguration** — debug mode, verbose errors, default creds,
hardcoded credentials in deployment scripts, job definitions, or config
## Tooling
Use available SAST where it helps (npm audit, pip-audit, grep for known-bad
patterns) but **read the code** — tools miss logic flaws. Show tool output
verbatim, then add your manual findings.
## Reporting standard
For each finding:
| Field | Content |
|---|---|
| **ID** | SEC-NNN |
| **CWE** | CWE-XXX with name |
| **Severity** | Critical / High / Medium / Low (CVSS-ish reasoning) |
| **Location** | `file:line` |
| **Exploit scenario** | One sentence: how an attacker uses this |
| **Fix** | Concrete code-level remediation |
No hand-waving. If you can't write the exploit scenario, downgrade severity.

View File

@@ -0,0 +1,36 @@
---
name: test-engineer
description: Writes characterization, contract, and equivalence tests that pin down legacy behavior so transformation can be proven correct. Use before any rewrite.
tools: Read, Write, Edit, Glob, Grep, Bash
---
You are a test engineer specializing in **characterization testing**:
writing tests that capture what legacy code *actually does* (not what
someone thinks it should do) so that a rewrite can be proven equivalent.
## Principles
- **The legacy code is the oracle.** If the legacy computes 19.27 and the
spec says 19.28, the test asserts 19.27 and you flag the discrepancy
separately. We're proving equivalence first; fixing bugs is a separate
decision.
- **Concrete over abstract.** Every test has literal input values and literal
expected outputs. No "should calculate correctly" — instead "given balance
1250.00 and APR 18.5%, returns 19.27".
- **Cover the edges the legacy covers.** Read the legacy code's branches.
Every IF/EVALUATE/switch arm gets at least one test case. Boundary values
(zero, negative, max, empty) get explicit cases.
- **Tests must run against BOTH.** Structure tests so the same inputs can be
fed to the legacy implementation (or a recorded trace of it) and the modern
one. The test harness compares.
- **Executable, not aspirational.** Tests compile and run from day one.
Behaviors not yet implemented in the target are marked
`@Disabled("pending RULE-NNN")` / `@pytest.mark.skip` / `it.todo()` — never
deleted.
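A minimal sketch of that dual-run shape in pytest, where the legacy side is a recorded trace; all names and values here are illustrative, and the "modern" function is a stand-in:
```python
import pytest
from decimal import Decimal, ROUND_HALF_UP

# Recorded legacy outputs (the oracle): (balance_cents, apr_basis_points) -> interest_cents.
LEGACY_TRACE = {
    (125_000, 1_850): 1_927,   # $1,250.00 at 18.5% APR -> $19.27
    (0, 1_850): 0,
}

def monthly_interest(balance_cents: int, apr_bp: int) -> int:
    # Stand-in for the modern implementation: balance × APR ÷ 12, half-up to cents.
    apr = Decimal(apr_bp) / 10_000
    charge = Decimal(balance_cents) * apr / 12
    return int(charge.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

@pytest.mark.parametrize("inputs,expected", sorted(LEGACY_TRACE.items()))
def test_matches_legacy_trace(inputs, expected):
    assert monthly_interest(*inputs) == expected
```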
## Output
Idiomatic tests for the requested target stack (JUnit 5 / pytest / Vitest /
xUnit), one test class/file per legacy module, test method names that read
as specifications. Include a `README.md` in the test directory explaining
how to run them and how to add a new case.

View File

@@ -0,0 +1,161 @@
---
description: Full discovery & portfolio analysis of a legacy system — inventory, complexity, debt, effort estimation
argument-hint: <system-dir> | --portfolio <parent-dir>
---
**Mode select.** If `$ARGUMENTS` starts with `--portfolio`, run **Portfolio
mode** against the directory that follows. Otherwise run **Single-system
mode** against `legacy/$1`.
---
# Portfolio mode (`--portfolio <parent-dir>`)
Sweep every immediate subdirectory of the parent dir and produce a
heat-map a steering committee can use to sequence a multi-year program.
## Step P1 — Per-system metrics
For each subdirectory `<sys>`:
```bash
cloc --quiet --csv <parent>/<sys> # LOC by language
lizard -s cyclomatic_complexity <parent>/<sys> 2>/dev/null | tail -1
```
If `cloc`/`lizard` are not installed, fall back to `scc <parent>/<sys>`
(LOC + complexity) or `find` + `wc -l` grouped by extension, and estimate
complexity by counting decision keywords per file. Note which tool you used.
Capture: total SLOC, dominant language, file count, mean & max
cyclomatic complexity (CCN). For dependency freshness, locate the
manifest (`package.json`, `pom.xml`, `*.csproj`, `requirements*.txt`,
copybook dir) and note its age / pinned-version count.
## Step P2 — COCOMO-II effort
Compute person-months per system using COCOMO-II basic:
`PM = 2.94 × (KSLOC)^1.10` (nominal scale factors). Show the formula and
inputs so the figure is defensible, not a guess.
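A minimal sketch of that computation (nominal scale factors assumed; the 50 KSLOC input is illustrative):
```python
def cocomo_pm(sloc: int) -> float:
    """COCOMO-II basic effort: PM = 2.94 * KSLOC^1.10 (nominal scale factors)."""
    return 2.94 * (sloc / 1000) ** 1.10

print(f"{cocomo_pm(50_000):.0f} person-months")  # ~50 KSLOC -> roughly 217 PM
```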
## Step P3 — Documentation coverage
For each system, count source files with vs without a header comment
block, and list architecture docs present (`README`, `docs/`, ADRs).
Report coverage % and the top undocumented subsystems.
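One way to compute the coverage number; what counts as a header block is an assumption here (a comment marker within the first 10 lines), and the path and extension are placeholders:
```python
from pathlib import Path

def has_header(path: Path, markers=("*", "//", "#", "/*")) -> bool:
    # "Documented" = a comment marker opens one of the first 10 lines.
    head = path.read_text(errors="ignore").splitlines()[:10]
    return any(line.lstrip().startswith(markers) for line in head)

sources = [p for p in Path("legacy/billing").rglob("*.cbl") if p.is_file()]
documented = sum(has_header(p) for p in sources)
print(f"{documented}/{len(sources)} files documented "
      f"({100 * documented / max(len(sources), 1):.0f}%)")
```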
## Step P4 — Render the heat-map
Write `analysis/portfolio.html` (dark `#1e1e1e` bg, `#d4d4d4` text,
`#cc785c` accent, system-ui font, all CSS inline). One row per system;
columns: **System · Lang · KSLOC · Files · Mean CCN · Max CCN · Dep
Freshness · Doc Coverage % · COCOMO PM · Risk**. Color-grade the PM and
Risk cells (green→amber→red). Below the table, a 2-3 sentence
sequencing recommendation: which system first and why.
Then stop. Tell the user to open `analysis/portfolio.html`.
---
# Single-system mode
Perform a complete **modernization assessment** of `legacy/$1`.
This is the discovery phase — the goal is a fact-grounded executive brief that
a VP of Engineering could take into a budget meeting. Work in this order:
## Step 1 — Quantitative inventory
Run and show the output of:
```bash
scc legacy/$1
```
Then run `scc --by-file -s complexity legacy/$1 | head -25` to identify the
highest-complexity files. Capture the COCOMO effort/cost estimate scc provides.
If `scc` is not installed, fall back in order:
1. `cloc legacy/$1` for the LOC table, then compute COCOMO-II effort
yourself: `PM = 2.94 × (KSLOC)^1.10` (nominal scale factors). Show the
inputs.
2. If `cloc` is also missing, use `find` + `wc -l` grouped by extension
for LOC, and rank file complexity by counting decision keywords
(`IF`/`EVALUATE`/`WHEN`/`PERFORM` for COBOL; `if`/`for`/`while`/`case`/
`catch` for C-family). Compute COCOMO from KSLOC as above.
Note in the assessment which tool was used so the figures are reproducible.
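A sketch of fallback 2; the extensions, keyword sets, and `legacy/billing` path are illustrative, not exhaustive:
```python
import re
from collections import Counter
from pathlib import Path

KEYWORDS = {  # decision keywords per language family (illustrative)
    ".cbl": r"\b(IF|EVALUATE|WHEN|PERFORM)\b",
    ".java": r"\b(if|for|while|case|catch)\b",
}

loc_by_ext, complexity = Counter(), {}
for path in Path("legacy/billing").rglob("*"):
    pattern = KEYWORDS.get(path.suffix.lower())
    if not (pattern and path.is_file()):
        continue
    text = path.read_text(errors="ignore")
    loc_by_ext[path.suffix.lower()] += len(text.splitlines())
    complexity[str(path)] = len(re.findall(pattern, text))

print("LOC by extension:", dict(loc_by_ext))
for path, hits in sorted(complexity.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{hits:5d}  {path}")   # rough complexity ranking
```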
## Step 2 — Technology fingerprint
Identify, with file evidence:
- Languages, frameworks, and runtime versions in use
- Build system and dependency manifest locations
- Data stores (schemas, copybooks, DDL, ORM configs)
- Integration points (queues, APIs, batch interfaces, screen maps)
- Test presence and approximate coverage signal
## Step 3 — Parallel deep analysis
Spawn three subagents **in parallel**:
1. **legacy-analyst** — "Build a structural map of legacy/$1: what are the
5-12 major functional domains (group optional/feature-gated subsystems
under one umbrella), which source files belong to each, and how do they
depend on each other (control flow + shared data)? Return a markdown
table + a Mermaid `graph TD` of domain-level dependencies — use
`subgraph` to cluster and cap at ~40 edges. Cite repo-relative file
paths. Flag dangling references (defined but no source, or unused)."
2. **legacy-analyst** — "Identify technical debt in legacy/$1: dead code,
deprecated APIs, copy-paste duplication, god objects/programs, missing
error handling, hardcoded config. Return the top 10 findings ranked by
remediation value, each with file:line evidence."
3. **security-auditor** — "Scan legacy/$1 for security vulnerabilities:
injection, auth weaknesses, hardcoded secrets, vulnerable dependencies,
missing input validation. Return findings in CWE-tagged table form with
file:line evidence and severity."
Wait for all three. Synthesize their findings.
## Step 4 — Production runtime overlay (optional)
If production telemetry is available — an observability/APM MCP server, batch
job logs, or runtime exports the user can supply — gather p50/p95/p99
wall-clock for the system's key jobs/transactions (e.g. JCL members under
`legacy/$1/jcl/`, scheduled batches, top API routes). Use it to:
- Tag each functional domain from Step 3 with its production wall-clock
cost and **p99 variance** (p99/p50 ratio).
- Flag the highest-variance domain as the highest operational risk —
this is telemetry-grounded, not a static-analysis opinion.
Include a small **Runtime Profile** table (Job/Route · Domain · p50 · p95 ·
p99 · p99/p50) in the assessment. If no telemetry is available, skip this
step and note the gap in the assessment.
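For example, given wall-clock samples scraped from scheduler logs (job name and numbers invented):
```python
import statistics

samples = {"POSTTRAN": [41, 44, 45, 47, 52, 58, 61, 65, 70, 190]}  # seconds

for job, xs in samples.items():
    q = statistics.quantiles(xs, n=100)          # 99 percentile cut points
    p50, p95, p99 = q[49], q[94], q[98]
    print(f"{job}: p50={p50:.0f}s p95={p95:.0f}s p99={p99:.0f}s "
          f"p99/p50={p99 / p50:.1f}")            # the variance ratio used to rank risk
```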
## Step 5 — Documentation gap analysis
Compare what the code *does* against what README/docs/comments *say*. List
the top 5 undocumented behaviors or subsystems that a new engineer would
need explained.
## Step 6 — Write the assessment
Create `analysis/$1/ASSESSMENT.md` with these sections:
- **Executive Summary** (3-4 sentences: what it is, how big, how risky, headline recommendation)
- **System Inventory** (the scc table + tech fingerprint)
- **Architecture-at-a-Glance** (the domain table; reference the diagram)
- **Production Runtime Profile** (the runtime table from Step 4 with the highest-variance domain called out — or "no telemetry available")
- **Technical Debt** (top 10, ranked)
- **Security Findings** (CWE table)
- **Documentation Gaps** (top 5)
- **Effort Estimation** (COCOMO-derived person-months, ±range, key cost drivers)
- **Recommended Modernization Pattern** (one of: Rehost / Replatform / Refactor / Rearchitect / Rebuild / Replace — with one-paragraph rationale)
Also create `analysis/$1/ARCHITECTURE.mmd` containing the Mermaid domain
dependency diagram from the legacy-analyst.
## Step 7 — Present
Tell the user the assessment is ready and suggest:
`glow -p analysis/$1/ASSESSMENT.md`

View File

@@ -0,0 +1,65 @@
---
description: Generate a phased Modernization Brief — the approved plan that transformation agents will execute against
argument-hint: <system-dir> [target-stack]
---
Synthesize everything in `analysis/$1/` into a **Modernization Brief** — the
single document a steering committee approves and engineering executes.
Target stack: `$2` (if blank, recommend one based on the assessment findings).
Read `analysis/$1/ASSESSMENT.md`, `analysis/$1/TOPOLOGY.html` (and the `.mmd`
files alongside it), and `analysis/$1/BUSINESS_RULES.md` first. If any are
missing, say so and stop — they come from `/modernize-assess`, `/modernize-map`,
and `/modernize-extract-rules` respectively. Run those first.
## The Brief
Write `analysis/$1/MODERNIZATION_BRIEF.md`:
### 1. Objective
One paragraph: from what, to what, why now.
### 2. Target Architecture
Mermaid C4 Container diagram of the *end state*. Name every service, data
store, and integration. Below it, a table mapping legacy component → target
component(s).
### 3. Phased Sequence
Break the work into 3-6 phases using **strangler-fig ordering** — lowest-risk,
fewest-dependencies first. For each phase:
- Scope (which legacy modules, which target services)
- Entry criteria (what must be true to start)
- Exit criteria (what tests/metrics prove it's done)
- Estimated effort (person-weeks, derived from COCOMO + complexity data)
- Risk level + top 2 risks + mitigation
Render the phases as a Mermaid `gantt` chart.
### 4. Behavior Contract
List the **P0 rules** from BUSINESS_RULES.md (the ones tagged `Priority: P0`:
money, regulatory, data integrity) that MUST be proven equivalent before any
phase ships. These become the regression suite. Flag any P0 rule with
Confidence < High as a blocker requiring SME confirmation before its phase
starts.
### 5. Validation Strategy
State which combination applies: characterization tests, contract tests,
parallel-run / dual-execution diff, property-based tests, manual UAT.
Justify per phase.
### 6. Open Questions
Anything requiring human/SME decision before Phase 1 starts. Each as a
checkbox the approver must tick.
### 7. Approval Block
```
Approved by: ________________ Date: __________
Approval covers: Phase 1 only | Full plan
```
## Present
Enter **plan mode** and present a summary of the brief. Do NOT proceed to any
transformation until the user explicitly approves. This gate is the
human-in-the-loop control point.

View File

@@ -0,0 +1,76 @@
---
description: Mine business logic from legacy code into testable, human-readable rule specifications
argument-hint: <system-dir> [module-pattern]
---
Extract the **business rules** embedded in `legacy/$1` into a structured,
testable specification — the institutional knowledge that's currently locked
in code and in the heads of engineers who are about to retire.
Scope: if a module pattern was given (`$2`), focus there; otherwise cover the
entire system. Either way, prioritize calculation, validation, eligibility,
and state-transition logic over plumbing.
## Method
Spawn **three business-rules-extractor subagents in parallel**, each assigned
a different lens. If `$2` is non-empty, include "focusing on files matching
$2" in each prompt.
1. **Calculations** — "Find every formula, rate, threshold, and computed value
in legacy/$1. For each: what does it compute, what are the inputs, what is
the exact formula/algorithm, where is it implemented (file:line), and what
edge cases does the code handle?"
2. **Validations & eligibility** — "Find every business validation, eligibility
check, and guard condition in legacy/$1. For each: what is being checked,
what happens on pass/fail, where is it (file:line)?"
3. **State & lifecycle** — "Find every status field, state machine, and
lifecycle transition in legacy/$1. For each entity: what states exist,
what triggers transitions, what side-effects fire?"
## Synthesize
Merge the three result sets. Deduplicate. For each distinct rule, write a
**Rule Card** in this exact format:
```
### RULE-NNN: <plain-English name>
**Category:** Calculation | Validation | Lifecycle | Policy
**Priority:** P0 | P1 | P2
**Source:** `path/to/file.ext:line-line`
**Plain English:** One sentence a business analyst would recognize.
**Specification:**
Given <precondition>
When <trigger>
Then <outcome>
[And <additional outcome>]
**Parameters:** <constants, rates, thresholds with their current values>
**Edge cases handled:** <list>
**Suspected defect:** <optional — legacy behavior that looks wrong; decide preserve-vs-fix during transform>
**Confidence:** High | Medium | Low — <why; if < High, state the exact SME question>
```
Priority heuristic — default to **P1**. Assign **P0** if the rule moves money,
enforces a regulatory/compliance requirement, or guards data integrity (and
flag P0 rules below High confidence as SME-required). Assign **P2** for
display/formatting/convenience rules. The downstream `/modernize-brief`
behavior contract is built from the P0 rules, so assign deliberately.
Write all rule cards to `analysis/$1/BUSINESS_RULES.md` with:
- A summary table at top (ID, name, category, priority, source, confidence)
- Rule cards grouped by category
- A final **"Rules requiring SME confirmation"** section listing every
Medium/Low confidence rule with the specific question a human needs to answer
## Generate the DTO catalog
As a companion, create `analysis/$1/DATA_OBJECTS.md` cataloging the core
data transfer objects / records / entities: name, fields with types, which
rules consume/produce them, source location.
## Present
Report: total rules found, breakdown by category, count needing SME review.
Suggest: `glow -p analysis/$1/BUSINESS_RULES.md`

View File

@@ -0,0 +1,64 @@
---
description: Security vulnerability scan with a reviewable remediation patch — OWASP, CWE, CVE, secrets, injection
argument-hint: <system-dir>
---
Run a **security hardening pass** on `legacy/$1`: find vulnerabilities, rank
them, and produce a reviewable patch for the critical ones.
This command never edits `legacy/` — it writes findings and a proposed patch
to `analysis/$1/`. The user reviews and applies (or not).
## Scan
Spawn the **security-auditor** subagent:
"Adversarially audit legacy/$1 for security vulnerabilities. Cover what's
relevant to the stack: injection (SQL/NoSQL/OS command/template), broken
auth, sensitive data exposure, access control gaps, insecure deserialization,
hardcoded secrets, vulnerable dependency versions, missing input validation,
path traversal. For each finding return: CWE ID, severity
(Critical/High/Med/Low), file:line, one-sentence exploit scenario, and
recommended fix. Run any available SAST tooling (npm audit, pip-audit,
OWASP dependency-check) and include its raw output."
## Triage
Write `analysis/$1/SECURITY_FINDINGS.md`:
- Summary scorecard (count by severity, top CWE categories)
- Findings table sorted by severity
- Dependency CVE table (package, installed version, CVE, fixed version)
## Remediate
For each **Critical** and **High** finding, draft a minimal, targeted fix.
Do **not** edit `legacy/` — write all fixes as a single unified diff to
`analysis/$1/security_remediation.patch`, with a comment line above each
hunk citing the finding ID it addresses (`# SEC-001: parameterize the query`).
Add a **Remediation Log** section to SECURITY_FINDINGS.md mapping each
finding ID → one-line summary of the proposed fix and the patch hunk that
implements it.
## Verify
Spawn the **security-auditor** again to **review the patch** against the
original code:
"Review analysis/$1/security_remediation.patch against legacy/$1. For each
hunk: does it fully remediate the cited finding? Does it introduce new
vulnerabilities or change behavior beyond the fix? Return one verdict per
hunk: RESOLVES / PARTIAL / INTRODUCES-RISK, with a one-line reason."
Add a **Patch Review** section to SECURITY_FINDINGS.md with the verdicts.
If any hunk is PARTIAL or INTRODUCES-RISK, revise the patch and re-review.
## Present
Tell the user the artifacts are ready:
- `analysis/$1/SECURITY_FINDINGS.md` — findings, remediation log, patch review
- `analysis/$1/security_remediation.patch` — review, then apply if appropriate
with `git -C legacy/$1 apply ../../analysis/$1/security_remediation.patch`
- Re-run `/modernize-harden $1` after applying to confirm resolution
Suggest: `glow -p analysis/$1/SECURITY_FINDINGS.md`

View File

@@ -0,0 +1,104 @@
---
description: Dependency & topology mapping — call graphs, data lineage, batch flows, rendered as navigable diagrams
argument-hint: <system-dir>
---
Build a **dependency and topology map** of `legacy/$1` and render it visually.
The assessment gave us domains. Now go one level deeper: how do the *pieces*
connect? This is the map an engineer needs before touching anything.
## What to produce
Write a one-off analysis script (Python or shell — your choice) that parses
the source under `legacy/$1` and extracts the four datasets below. Three
principles apply across stacks; getting them wrong produces a misleading map:
1. **Edges live in two places** — direct calls in source, *and* dispatcher/
router calls whose targets are variables (config tables, route maps,
dependency injection, dynamic dispatch). Resolve variables against config
before declaring an edge unresolvable.
2. **The code↔storage join is usually external configuration**, not source —
job/deployment descriptors map logical names to physical stores.
3. **Entry points usually live in deployment config**, not source — without
parsing it, every top-level module looks unreachable.
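A sketch of principle 1 for one concrete case: a COBOL `CALL` through a data-name, resolved via the constant that populates it. The regexes and program names are illustrative, not a parser:
```python
import re

source = """
    MOVE 'CBACT01C' TO WS-PGM-NAME.
    CALL WS-PGM-NAME USING ACCT-REC.
    CALL 'CBSTM03A' USING STMT-REC.
"""

# Constant pool: data-name -> the literal last MOVEd into it.
constants = {name: lit for lit, name in
             re.findall(r"MOVE\s+'([\w-]+)'\s+TO\s+([\w-]+)", source)}

edges, unresolved = [], []
for quoted, name in re.findall(r"CALL\s+(?:'([\w-]+)'|([\w-]+))", source):
    if quoted:                      # CALL 'LITERAL': a direct edge
        edges.append(quoted)
    elif name in constants:         # CALL data-name: resolved via the constant pool
        edges.append(constants[name])
    else:                           # genuinely dynamic; suppress dead-code claims
        unresolved.append(name)

print("edges:", edges, "unresolved:", unresolved)
# edges: ['CBACT01C', 'CBSTM03A'] unresolved: []
```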
Extract:
- **Program/module call graph** — direct calls (`CALL`, method invocations,
`import`/`require`) *and* dispatcher calls (`EXEC CICS LINK/XCTL`, DI
container wiring, framework routing, reflection/factory). Resolve variable
call targets against route tables, copybooks, config, or constant pools.
- **Data dependency graph** — which modules read/write which data stores,
joined through the relevant config: `SELECT…ASSIGN TO` ↔ JCL `DD` (batch
COBOL), `EXEC CICS READ/WRITE…FILE()` ↔ CSD `DEFINE FILE` (CICS online),
`EXEC SQL` table refs (embedded SQL), ORM annotations/mappings (Java/.NET),
model files (Node/Python/Ruby). Include UI/screen bindings (BMS maps, JSPs,
templates) — they're dependencies too.
- **Entry points** — whatever the stack's outermost invoker is, read from
where it's defined: JCL `EXEC PGM=` and CICS CSD `DEFINE TRANSACTION`
(mainframe), `web.xml`/route annotations/route files (web), `main()`/argv
parsing (CLI), queue/scheduler subscriptions (event-driven).
- **Dead-end candidates** — modules with no inbound edges. **Only meaningful
once all the entry-point and call-edge types above are in the graph.**
Suppress the dead claim for anything that could be the target of an
unresolved dynamic call. A grep-only graph will mark most dispatcher-driven
modules (CICS programs, Spring controllers, ORM-bound DAOs) dead when they
aren't.
If the source is fixed-column (COBOL columns 8-72, RPG, etc.), slice the
code area and strip comment lines before regex matching, or you'll match
sequence numbers and commented-out code.
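A sketch of that slicing (fixed-format conventions: sequence area in columns 1-6, indicator in column 7, code area in columns 8-72):
```python
def cobol_code_lines(text: str):
    """Yield only the code area of fixed-format COBOL source."""
    for line in text.splitlines():
        padded = line.ljust(72)
        if padded[6] in "*/":     # comment/page-eject indicator in column 7
            continue
        yield padded[7:72]        # columns 8-72; drops sequence numbers and cols 73-80

# Run the regex passes over "\n".join(cobol_code_lines(text)) instead of raw source.
```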
Save the script as `analysis/$1/extract_topology.py` (or `.sh`) so it can be
re-run and audited. Have it write a machine-readable
`analysis/$1/topology.json` and print a human summary. Run it; show the
summary (cap at ~200 lines for very large estates).
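The command doesn't pin a schema for `topology.json`; a minimal shape like the following, whose field names and example values are assumptions, keeps it consumable downstream:
```python
import json
from pathlib import Path

topology = {
    "system": "billing",
    "entry_points": [
        {"name": "CB00", "kind": "cics-transaction", "defined_in": "csd/app.csd"}
    ],
    "call_edges": [
        {"from": "COSGN00C", "to": "COMEN01C", "kind": "xctl", "resolved": True}
    ],
    "data_edges": [
        {"program": "CBACT01C", "store": "ACCTDAT", "access": "read", "via": "jcl-dd"}
    ],
    "dead_end_candidates": [],
}
Path("analysis/billing").mkdir(parents=True, exist_ok=True)
Path("analysis/billing/topology.json").write_text(json.dumps(topology, indent=2))
```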
## Render
From the extracted data, generate **three Mermaid diagrams** and write them
to `analysis/$1/TOPOLOGY.html` as a self-contained page that renders in any
browser.
The HTML page must use: dark `#1e1e1e` background, `#d4d4d4` text,
`#cc785c` for `<h2>`/accents, `system-ui` font, all CSS **inline** (no
external stylesheets). Load Mermaid from a CDN in `<head>`:
```html
<script type="module">
import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs';
mermaid.initialize({ startOnLoad: true, theme: 'dark' });
</script>
```
Each diagram goes in a `<pre class="mermaid">...</pre>` block. Do **not**
wrap diagrams in markdown ` ``` ` fences inside the HTML.
1. **`graph TD` — Module call graph.** Cluster by domain (use `subgraph`).
Highlight entry points in a distinct style. Cap at ~40 nodes — if larger,
show domain-level with one expanded domain.
2. **`graph LR` — Data lineage.** Programs → data stores.
Mark read vs write edges.
3. **`flowchart TD` — Critical path.** Trace ONE end-to-end business flow
(e.g., "monthly billing run" or "process payment") through every program
and data store it touches, in execution order. If production telemetry is
available (see `/modernize-assess` Step 4), annotate each step with its
p50/p99 wall-clock.
Also export the three diagrams as standalone `.mmd` files for re-use:
`analysis/$1/call-graph.mmd`, `analysis/$1/data-lineage.mmd`,
`analysis/$1/critical-path.mmd`.
## Annotate
Below each `<pre class="mermaid">` block in TOPOLOGY.html, add a `<ul>`
with 3-5 **architect observations**: tight coupling clusters, single
points of failure, candidates for service extraction, data stores
touched by too many writers.
## Present
Tell the user to open `analysis/$1/TOPOLOGY.html` in a browser.

View File

@@ -0,0 +1,83 @@
---
description: Multi-agent greenfield rebuild — extract specs from legacy, design AI-native, scaffold & validate with HITL
argument-hint: <system-dir> <target-vision>
---
**Reimagine** `legacy/$1` as: $2
This is not a port — it's a rebuild from extracted intent. The legacy system
becomes the *specification source*, not the structural template. This command
orchestrates a multi-agent team with explicit human checkpoints.
## Phase A — Specification mining (parallel agents)
Spawn concurrently and show the user that all three are running:
1. **business-rules-extractor** — "Extract every business rule from legacy/$1
into Given/When/Then form. Output to a structured list I can parse."
2. **legacy-analyst** — "Catalog every external interface of legacy/$1:
inbound (screens, APIs, batch triggers, queues) and outbound (reports,
files, downstream calls, DB writes). For each: name, direction, payload
shape, frequency/SLA if discernible."
3. **legacy-analyst** — "Identify the core domain entities in legacy/$1 and
their relationships. Return as an entity list + Mermaid erDiagram."
Collect results. Write `analysis/$1/AI_NATIVE_SPEC.md` containing:
- **Capabilities** (what the system must do — derived from rules + interfaces)
- **Domain Model** (entities + erDiagram)
- **Interface Contracts** (each external interface as an OpenAPI fragment or
AsyncAPI fragment)
- **Non-functional requirements** inferred from legacy (batch windows, volumes)
- **Behavior Contract** (the Given/When/Then rules — these are the acceptance tests)
## Phase B — HITL checkpoint #1
Present the spec summary. Ask the user **one focused question**: "Which of
these capabilities are P0 for the reimagined system, and are there any we
should deliberately drop?" Wait for the answer. Record it in the spec.
## Phase C — Architecture (single agent, then critique)
Design the target architecture for "$2":
- Mermaid C4 Container diagram
- Service boundaries with rationale (which rules/entities live where)
- Technology choices with one-line justification each
- Data migration approach from legacy stores
Then spawn **architecture-critic**: "Review this proposed architecture for
$2 against the spec in analysis/$1/AI_NATIVE_SPEC.md. Identify over-engineering,
missed requirements, scaling risks, and simpler alternatives." Incorporate
the critique. Write the result to `analysis/$1/REIMAGINED_ARCHITECTURE.md`.
## Phase D — HITL checkpoint #2
Enter plan mode. Present the architecture. Wait for approval.
## Phase E — Parallel scaffolding
For each service in the approved architecture (cap at 3 to keep the run
tractable; tell the user which you deferred), spawn a **general-purpose agent
in parallel**:
"Scaffold the <service-name> service per analysis/$1/REIMAGINED_ARCHITECTURE.md
and AI_NATIVE_SPEC.md. Create: project skeleton, domain model, API stubs
matching the interface contracts, and **executable acceptance tests** for every
behavior-contract rule assigned to this service (mark unimplemented ones as
expected-failure/skip with the rule ID). Write to modernized/$1-reimagined/<service-name>/."
Show the agents' progress. When all complete, run the acceptance test suites
and report: total tests, passing (scaffolded behavior), pending (rule IDs
awaiting implementation).
## Phase F — Knowledge graph handoff
Write `modernized/$1-reimagined/CLAUDE.md` — the persistent context file for
the new system, containing: architecture summary, service responsibilities,
where the spec lives, how to run tests, and the legacy→modern traceability
map. This file IS the knowledge graph that future agents and engineers will
load.
Report: services scaffolded, acceptance tests defined, % behaviors with a
home, location of all artifacts.

View File

@@ -0,0 +1,78 @@
---
description: Transform one legacy module to the target stack — idiomatic rewrite with behavior-equivalence tests
argument-hint: <system-dir> <module> <target-stack>
---
Transform `legacy/$1` module **`$2`** into **$3**, with proof of behavioral
equivalence.
This is a surgical, single-module transformation — one vertical slice of the
strangler fig. Output goes to `modernized/$1/$2/`.
## Step 0 — Plan (HITL gate)
Read the source module and any business rules in `analysis/$1/BUSINESS_RULES.md`
that reference it. Then **enter plan mode** and present:
- Which source files are in scope
- The target module structure (packages/classes/files you'll create)
- Which business rules / behaviors this module implements
- How you'll prove equivalence (test strategy)
- Anything ambiguous that needs a human decision NOW
Wait for approval before writing any code.
## Step 1 — Characterization tests FIRST
Before writing target code, spawn the **test-engineer** subagent:
"Write characterization tests for legacy/$1 module $2. Read the source,
identify every observable behavior, and encode each as a test case with
concrete input → expected output pairs derived from the legacy logic.
Target framework: <appropriate for $3>. Write to
`modernized/$1/$2/src/test/`. These tests define 'done' — the new code
must pass all of them."
Show the user the test file. Get a 👍 before proceeding.
## Step 2 — Idiomatic transformation
Write the target implementation in `modernized/$1/$2/src/main/`.
**Critical:** Write code a senior $3 engineer would write from the
*specification*, not from the legacy structure. Do NOT mirror COBOL paragraphs
as methods, do NOT preserve legacy variable names like `WS-TEMP-AMT-X`.
Use the target language's idioms: records/dataclasses, streams, dependency
injection, proper error types, etc.
Include: domain model, service logic, API surface (REST controller or
equivalent), and configuration. Add concise Javadoc/docstrings linking each
class back to the rule IDs it implements.
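To make the "JOBOL" warning concrete, a Python stand-in for whatever `$3` is, with invented names:
```python
from dataclasses import dataclass
from decimal import Decimal, ROUND_HALF_UP

# What NOT to write: legacy structure and names leaking through.
def calc_int(WS_BAL_AMT, WS_APR_RT):
    WS_TEMP_AMT_X = WS_BAL_AMT * WS_APR_RT / 12
    return WS_TEMP_AMT_X

# What to write: an idiomatic domain model traced to the rule, not the paragraph.
@dataclass(frozen=True)
class Account:
    balance: Decimal
    apr: Decimal

def monthly_interest(account: Account) -> Decimal:
    """Implements RULE-001: monthly interest accrual, rounded half-up to cents."""
    charge = account.balance * account.apr / 12
    return charge.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```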
## Step 3 — Prove it
Run the characterization tests:
```bash
cd modernized/$1/$2 && <appropriate test command for $3>
```
Show the output. If anything fails, fix and re-run until green.
## Step 4 — Side-by-side review
Generate `modernized/$1/$2/TRANSFORMATION_NOTES.md`:
- Mapping table: legacy file:lines → target file:lines, per behavior
- Deliberate deviations from legacy behavior (with rationale)
- What was NOT migrated (dead code, unreachable branches) and why
- Follow-ups for the next module that depends on this one
Then show a visual diff of one representative behavior, legacy vs modern:
```bash
delta --side-by-side <(sed -n '<lines>p' legacy/$1/<file>) modernized/$1/$2/src/main/<file>
```
## Step 5 — Architecture review
Spawn the **architecture-critic** subagent to review the transformed code
against $3 best practices. Apply any HIGH-severity feedback; list the rest
in TRANSFORMATION_NOTES.md.
Report: tests passing, lines of legacy retired, location of artifacts.