mirror of https://github.com/anthropics/claude-plugins-official.git (synced 2026-05-03 09:22:41 +00:00)

Add code-modernization plugin
Structured workflow (assess → map → extract-rules → reimagine → transform → harden) and specialist agents (legacy-analyst, business-rules-extractor, architecture-critic, security-auditor, test-engineer) for modernizing legacy codebases into current stacks.

plugins/code-modernization/commands/modernize-assess.md (new file, 142 lines)

---
description: Full discovery & portfolio analysis of a legacy system — inventory, complexity, debt, effort estimation
argument-hint: <system-dir> | --portfolio <parent-dir>
---

**Mode select.** If `$ARGUMENTS` starts with `--portfolio`, run **Portfolio mode** against the directory that follows. Otherwise run **Single-system mode** against `legacy/$1`.

---

# Portfolio mode (`--portfolio <parent-dir>`)

Sweep every immediate subdirectory of the parent dir and produce a heat-map a steering committee can use to sequence a multi-year program.

## Step P1 — Per-system metrics

For each subdirectory `<sys>`:

```bash
cloc --quiet --csv <parent>/<sys>                                # LOC by language
lizard -s cyclomatic_complexity <parent>/<sys> 2>/dev/null | tail -1
```

Capture: total SLOC, dominant language, file count, mean & max cyclomatic complexity (CCN). For dependency freshness, locate the manifest (`package.json`, `pom.xml`, `*.csproj`, `requirements*.txt`, copybook dir) and note its age / pinned-version count.

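One way to get the freshness signal, sketched under the assumption of GNU `stat`/`date`; the manifest list and the pinned-version heuristic are illustrative, not exhaustive:

```bash
# Report manifest age in days plus a rough pinned-version count per manifest.
for m in package.json pom.xml requirements.txt; do
  find <parent>/<sys> -type f -name "$m" | while read -r f; do
    age_days=$(( ( $(date +%s) - $(stat -c %Y "$f") ) / 86400 ))
    pins=$(grep -cE '[0-9]+\.[0-9]+' "$f")   # crude: lines containing x.y version strings
    echo "$f  last-modified ${age_days}d ago  ~${pins} pinned versions"
  done
done
```
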
## Step P2 — COCOMO-II effort

Compute person-months per system using COCOMO-II basic: `PM = 2.94 × (KSLOC)^1.10` (nominal scale factors). Show the formula and inputs so the figure is defensible, not a guess.

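A worked instance of the formula, sketched with `awk` for the exponent (120 KSLOC is an arbitrary example input):

```bash
# COCOMO-II basic: PM = 2.94 * KSLOC^1.10 (nominal scale factors).
# For a 120 KSLOC system this prints roughly 569 person-months.
awk -v ksloc=120 'BEGIN { printf "PM = %.0f person-months\n", 2.94 * ksloc^1.10 }'
```
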
## Step P3 — Documentation coverage

For each system, count source files with vs without a header comment block, and list architecture docs present (`README`, `docs/`, ADRs). Report coverage % and the top undocumented subsystems.

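A minimal sketch of the header-comment count; the comment markers and file extensions are assumptions to adjust per language:

```bash
# Count source files whose first 3 lines contain a comment marker.
total=0; documented=0
while read -r f; do
  total=$((total + 1))
  head -3 "$f" | grep -qE '^[[:space:]]*(/\*|//|#|\*)' && documented=$((documented + 1))
done < <(find <parent>/<sys> -type f \( -name '*.java' -o -name '*.py' -o -name '*.cbl' \))
echo "doc coverage: $documented/$total"
```
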
## Step P4 — Render the heat-map

Write `analysis/portfolio.html` (dark `#1e1e1e` background, `#d4d4d4` text, `#cc785c` accent, system-ui font, all CSS inline). One row per system; columns: **System · Lang · KSLOC · Files · Mean CCN · Max CCN · Dep Freshness · Doc Coverage % · COCOMO PM · Risk**. Color-grade the PM and Risk cells (green→amber→red). Below the table, add a 2-3 sentence sequencing recommendation: which system first and why.

Then stop. Tell the user to open `analysis/portfolio.html`.

---

# Single-system mode

Perform a complete **modernization assessment** of `legacy/$1`.

This is the discovery phase — the goal is a fact-grounded executive brief that a VP of Engineering could take into a budget meeting. Work in this order:

## Step 1 — Quantitative inventory

Run and show the output of:

```bash
scc legacy/$1
```

Then run `scc --by-file -s complexity legacy/$1 | head -25` to identify the highest-complexity files. Capture the COCOMO effort/cost estimate scc provides.

## Step 2 — Technology fingerprint

Identify, with file evidence:

- Languages, frameworks, and runtime versions in use
- Build system and dependency manifest locations
- Data stores (schemas, copybooks, DDL, ORM configs)
- Integration points (queues, APIs, batch interfaces, screen maps)
- Test presence and approximate coverage signal

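A starting sweep for gathering that evidence, sketched with common manifest names and keywords (not exhaustive; tailor it to what Step 1 revealed):

```bash
# Surface manifests, schema files, and integration hints in one pass.
find legacy/$1 -maxdepth 4 -type f \( -name 'package.json' -o -name 'pom.xml' \
  -o -name '*.csproj' -o -name 'requirements*.txt' -o -name '*.jcl' \
  -o -name '*.sql' -o -name 'Dockerfile' \)
# Files mentioning queue/transaction/API keywords hint at integration points.
grep -rliE 'mq|kafka|cics|http' legacy/$1 --include='*.cbl' --include='*.java' | head
```
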
## Step 3 — Parallel deep analysis

Spawn three subagents **concurrently** using the Task tool:

1. **legacy-analyst** — "Build a structural map of legacy/$1: what are the 5-10 major functional domains, which source files belong to each, and how do they depend on each other? Return a markdown table + a Mermaid `graph TD` of domain-level dependencies. Cite file paths."

2. **legacy-analyst** — "Identify technical debt in legacy/$1: dead code, deprecated APIs, copy-paste duplication, god objects/programs, missing error handling, hardcoded config. Return the top 10 findings ranked by remediation value, each with file:line evidence."

3. **security-auditor** — "Scan legacy/$1 for security vulnerabilities: injection, auth weaknesses, hardcoded secrets, vulnerable dependencies, missing input validation. Return findings in CWE-tagged table form with file:line evidence and severity."

Wait for all three. Synthesize their findings.

## Step 4 — Production runtime overlay (observability)

If the system has batch jobs (e.g. JCL members under `app/jcl/`), call the `observability` MCP tool `get_batch_runtimes` for each business-relevant job name (interest, posting, statement, reporting). Use the returned p50/p95/p99 and 90-day series to:

- Tag each functional domain from Step 3 with its production wall-clock cost and **p99 variance** (p99/p50 ratio).
- Flag the highest-variance domain as the highest operational risk — this is telemetry-grounded, not a static-analysis opinion.

Include a small **Batch Runtime** table (Job · Domain · p50 · p95 · p99 · p99/p50) in the assessment.

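If the tool's responses are saved to a file such as `runtimes.json`, the variance ratio is one line of `jq`. The payload shape here (an array with `job`, `p50`, `p99` fields) is an assumption, not the tool's documented schema:

```bash
# Rank jobs by p99/p50: the widest ratio is the least predictable job.
jq -r '.[] | "\(.job)\tp99/p50=\(.p99 / .p50 * 100 | round / 100)"' runtimes.json | sort -t= -k2 -rn
```
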
## Step 5 — Documentation gap analysis

Compare what the code *does* against what README/docs/comments *say*. List the top 5 undocumented behaviors or subsystems that a new engineer would need explained.

## Step 6 — Write the assessment

Create `analysis/$1/ASSESSMENT.md` with these sections:

- **Executive Summary** (3-4 sentences: what it is, how big, how risky, headline recommendation)
- **System Inventory** (the scc table + tech fingerprint)
- **Architecture-at-a-Glance** (the domain table; reference the diagram)
- **Production Runtime Profile** (the batch-runtime table from Step 4, with the highest-variance domain called out)
- **Technical Debt** (top 10, ranked)
- **Security Findings** (CWE table)
- **Documentation Gaps** (top 5)
- **Effort Estimation** (COCOMO-derived person-months, ±range, key cost drivers)
- **Recommended Modernization Pattern** (one of: Rehost / Replatform / Refactor / Rearchitect / Rebuild / Replace — with one-paragraph rationale)

Also create `analysis/$1/ARCHITECTURE.mmd` containing the Mermaid domain dependency diagram from the legacy-analyst.

## Step 7 — Present

Tell the user the assessment is ready and suggest: `glow -p analysis/$1/ASSESSMENT.md`

plugins/code-modernization/commands/modernize-brief.md (new file, 60 lines)

---
description: Generate a phased Modernization Brief — the approved plan that transformation agents will execute against
argument-hint: <system-dir> [target-stack]
---

Synthesize everything in `analysis/$1/` into a **Modernization Brief** — the single document a steering committee approves and engineering executes.

Target stack: `$2` (if blank, recommend one based on the assessment findings).

Read `analysis/$1/ASSESSMENT.md`, `TOPOLOGY.md`, and `BUSINESS_RULES.md` first. If any are missing, say so and stop.

## The Brief

Write `analysis/$1/MODERNIZATION_BRIEF.md`:

### 1. Objective
One paragraph: from what, to what, why now.

### 2. Target Architecture
Mermaid C4 Container diagram of the *end state*. Name every service, data store, and integration. Below it, a table mapping legacy component → target component(s).

### 3. Phased Sequence
Break the work into 3-6 phases using **strangler-fig ordering** — lowest-risk, fewest-dependencies first. For each phase:
- Scope (which legacy modules, which target services)
- Entry criteria (what must be true to start)
- Exit criteria (what tests/metrics prove it's done)
- Estimated effort (person-weeks, derived from COCOMO + complexity data)
- Risk level + top 2 risks + mitigation

Render the phases as a Mermaid `gantt` chart.

### 4. Behavior Contract
List the **P0 behaviors** from BUSINESS_RULES.md that MUST be proven equivalent before any phase ships. These become the regression suite.

### 5. Validation Strategy
State which combination applies: characterization tests, contract tests, parallel-run / dual-execution diff, property-based tests, manual UAT. Justify per phase.

### 6. Open Questions
Anything requiring a human/SME decision before Phase 1 starts. Each as a checkbox the approver must tick.

### 7. Approval Block
```
Approved by: ________________   Date: __________
Approval covers: Phase 1 only | Full plan
```

## Present

Enter **plan mode** and present a summary of the brief. Do NOT proceed to any transformation until the user explicitly approves. This gate is the human-in-the-loop control point.

plugins/code-modernization/commands/modernize-extract-rules.md (new file, 68 lines)

---
description: Mine business logic from legacy code into testable, human-readable rule specifications
argument-hint: <system-dir> [module-pattern]
---

Extract the **business rules** embedded in `legacy/$1` into a structured, testable specification — the institutional knowledge that's currently locked in code and in the heads of engineers who are about to retire.

Scope: if a module pattern was given (`$2`), focus there; otherwise cover the entire system. Either way, prioritize calculation, validation, eligibility, and state-transition logic over plumbing.

## Method

Spawn **three business-rules-extractor subagents in parallel**, each assigned a different lens. If `$2` is non-empty, include "focusing on files matching $2" in each prompt.

1. **Calculations** — "Find every formula, rate, threshold, and computed value in legacy/$1. For each: what does it compute, what are the inputs, what is the exact formula/algorithm, where is it implemented (file:line), and what edge cases does the code handle?"

2. **Validations & eligibility** — "Find every business validation, eligibility check, and guard condition in legacy/$1. For each: what is being checked, what happens on pass/fail, where is it (file:line)?"

3. **State & lifecycle** — "Find every status field, state machine, and lifecycle transition in legacy/$1. For each entity: what states exist, what triggers transitions, what side-effects fire?"

## Synthesize

Merge the three result sets. Deduplicate. For each distinct rule, write a **Rule Card** in this exact format:

```
### RULE-NNN: <plain-English name>
**Category:** Calculation | Validation | Lifecycle | Policy
**Source:** `path/to/file.ext:line-line`
**Plain English:** One sentence a business analyst would recognize.
**Specification:**
Given <precondition>
When <trigger>
Then <outcome>
[And <additional outcome>]
**Parameters:** <constants, rates, thresholds with their current values>
**Edge cases handled:** <list>
**Confidence:** High | Medium | Low — <why>
```

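For calibration, here is a hypothetical card in that format; the rule, threshold value, and source path are invented for illustration:

```
### RULE-017: Maintenance fee waiver
**Category:** Validation
**Source:** `src/fees/FEEWAIVE.cbl:118-162` (hypothetical)
**Plain English:** Accounts that hold the minimum daily balance all cycle pay no monthly maintenance fee.
**Specification:**
Given an account whose minimum daily balance stayed at or above WAIVER-THRESHOLD for the full cycle
When the monthly fee job runs
Then the maintenance fee is not assessed
**Parameters:** WAIVER-THRESHOLD (currently 1500.00)
**Edge cases handled:** accounts opened mid-cycle; zero-balance dormant accounts
**Confidence:** Medium — the threshold appears in two copybooks with different values
```
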
Write all rule cards to `analysis/$1/BUSINESS_RULES.md` with:

- A summary table at top (ID, name, category, source, confidence)
- Rule cards grouped by category
- A final **"Rules requiring SME confirmation"** section listing every Medium/Low confidence rule with the specific question a human needs to answer

## Generate the DTO catalog

As a companion, create `analysis/$1/DATA_OBJECTS.md` cataloging the core data transfer objects / records / entities: name, fields with types, which rules consume/produce them, source location.

## Present

Report: total rules found, breakdown by category, count needing SME review. Suggest: `glow -p analysis/$1/BUSINESS_RULES.md`

plugins/code-modernization/commands/modernize-harden.md (new file, 46 lines)

---
description: Security vulnerability scan + remediation — OWASP, CVE, secrets, injection
argument-hint: <system-dir>
---

Run a **security hardening pass** on `legacy/$1`: find vulnerabilities, rank them, and fix the critical ones.

## Scan

Spawn the **security-auditor** subagent:

"Adversarially audit legacy/$1 for security vulnerabilities. Cover: OWASP Top 10 (injection, broken auth, XSS, SSRF, etc.), hardcoded secrets, vulnerable dependency versions (check package manifests against known CVEs), missing input validation, insecure deserialization, path traversal. For each finding return: CWE ID, severity (Critical/High/Med/Low), file:line, one-sentence exploit scenario, and recommended fix. Also run any available SAST tooling (npm audit, pip-audit, OWASP dependency-check) and include its raw output."

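A minimal driver for the dependency audits, assuming the tools are installed; each scanner is gated on its manifest so missing ecosystems are skipped:

```bash
cd legacy/$1
[ -f package.json ]     && npm audit --audit-level=high
[ -f requirements.txt ] && pip-audit -r requirements.txt
[ -f pom.xml ]          && mvn org.owasp:dependency-check-maven:check
```
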
## Triage

Write `analysis/$1/SECURITY_FINDINGS.md`:

- Summary scorecard (count by severity, top CWE categories)
- Findings table sorted by severity
- Dependency CVE table (package, installed version, CVE, fixed version)

## Remediate

For each **Critical** and **High** finding, fix it directly in the source. Make minimal, targeted changes. After each fix, add a one-line entry under "Remediation Log" in SECURITY_FINDINGS.md: finding ID → commit-style summary of what changed.

Show the cumulative diff:

```bash
git -C legacy/$1 diff
```

## Verify

Re-run the security-auditor against the patched code to confirm the Critical/High findings are resolved. Update the scorecard with before/after.

Suggest: `glow -p analysis/$1/SECURITY_FINDINGS.md`

plugins/code-modernization/commands/modernize-map.md (new file, 66 lines)

---
description: Dependency & topology mapping — call graphs, data lineage, batch flows, rendered as navigable diagrams
argument-hint: <system-dir>
---

Build a **dependency and topology map** of `legacy/$1` and render it visually.

The assessment gave us domains. Now go one level deeper: how do the *pieces* connect? This is the map an engineer needs before touching anything.

## What to produce

Write a one-off analysis script (Python or shell — your choice) that parses the source under `legacy/$1` and extracts:

- **Program/module call graph** — who calls whom (for COBOL: `CALL` statements and CICS `LINK`/`XCTL`; for Java: class-level imports/invocations; for Node: `require`/`import`)
- **Data dependency graph** — which programs read/write which data stores (COBOL: copybooks + VSAM/DB2 in JCL DD statements; Java: JPA entities/tables; Node: model files)
- **Entry points** — batch jobs, transaction IDs, HTTP routes, CLI commands
- **Dead-end candidates** — modules with no inbound edges (potential dead code)

Save the script as `analysis/$1/extract_topology.py` (or `.sh`) so it can be re-run and audited. Run it. Show the raw output.

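For the COBOL case, the core of the extraction can be as small as the sketch below. It assumes uppercase static `CALL 'PROGNAME'` literals; CICS `LINK`/`XCTL` and dynamic calls need their own patterns, and the output path is illustrative:

```bash
# Emit "caller-file -> callee" edges from static COBOL CALL statements.
grep -rnoE "CALL +'[A-Z0-9-]+'" --include='*.cbl' legacy/$1 |
  sed -E "s|^([^:]+):[0-9]+:CALL +'([A-Z0-9-]+)'|\1 -> \2|" |
  sort -u > analysis/$1/call-edges.txt
wc -l < analysis/$1/call-edges.txt   # edge count sanity check
```
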
## Render

From the extracted data, generate **three Mermaid diagrams** and write them to `analysis/$1/TOPOLOGY.html` so the artifact pane renders them live.

The HTML page must use: dark `#1e1e1e` background, `#d4d4d4` text, `#cc785c` for `<h2>`/accents, `system-ui` font, all CSS **inline** (no external stylesheets). Each diagram goes in a `<pre class="mermaid">...</pre>` block — the artifact server loads mermaid.js and renders client-side. Do **not** wrap diagrams in markdown ` ``` ` fences inside the HTML.

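A skeleton of that page, emitted via heredoc (structure only; the diagram body is a placeholder to be replaced with the generated Mermaid text, and no script tag is added because the artifact server loads mermaid.js itself, per the note above):

```bash
cat > analysis/$1/TOPOLOGY.html <<'EOF'
<!doctype html>
<html><head><meta charset="utf-8"><title>Topology</title></head>
<body style="background:#1e1e1e;color:#d4d4d4;font-family:system-ui">
<h2 style="color:#cc785c">Module call graph</h2>
<pre class="mermaid">
graph TD
  JOB1[entry: JOB1] --> PGM001
</pre>
</body></html>
EOF
```
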
1. **`graph TD` — Module call graph.** Cluster by domain (use `subgraph`). Highlight entry points in a distinct style. Cap at ~40 nodes — if larger, show domain-level with one expanded domain.

2. **`graph LR` — Data lineage.** Programs → data stores. Mark read vs write edges.

3. **`flowchart TD` — Critical path.** Trace ONE end-to-end business flow (e.g., "monthly billing run" or "process payment") through every program and data store it touches, in execution order. If the `observability` MCP server is connected, annotate each batch step with its p50/p99 wall-clock from `get_batch_runtimes`.

Also export the three diagrams as standalone `.mmd` files for re-use: `analysis/$1/call-graph.mmd`, `analysis/$1/data-lineage.mmd`, `analysis/$1/critical-path.mmd`.

## Annotate

Below each `<pre class="mermaid">` block in TOPOLOGY.html, add a `<ul>` with 3-5 **architect observations**: tight coupling clusters, single points of failure, candidates for service extraction, data stores touched by too many writers.

## Present

Tell the user to open `analysis/$1/TOPOLOGY.html` in the artifact pane.

plugins/code-modernization/commands/modernize-reimagine.md (new file, 82 lines)

---
description: Multi-agent greenfield rebuild — extract specs from legacy, design AI-native, scaffold & validate with HITL
argument-hint: <system-dir> <target-vision>
---

**Reimagine** `legacy/$1` as: $2

This is not a port — it's a rebuild from extracted intent. The legacy system becomes the *specification source*, not the structural template. This command orchestrates a multi-agent team with explicit human checkpoints.

## Phase A — Specification mining (parallel agents)

Spawn concurrently and show the user that all three are running:

1. **business-rules-extractor** — "Extract every business rule from legacy/$1 into Given/When/Then form. Output to a structured list I can parse."

2. **legacy-analyst** — "Catalog every external interface of legacy/$1: inbound (screens, APIs, batch triggers, queues) and outbound (reports, files, downstream calls, DB writes). For each: name, direction, payload shape, frequency/SLA if discernible."

3. **legacy-analyst** — "Identify the core domain entities in legacy/$1 and their relationships. Return as an entity list + Mermaid erDiagram."

Collect results. Write `analysis/$1/AI_NATIVE_SPEC.md` containing:

- **Capabilities** (what the system must do — derived from rules + interfaces)
- **Domain Model** (entities + erDiagram)
- **Interface Contracts** (each external interface as an OpenAPI fragment or AsyncAPI fragment)
- **Non-functional requirements** inferred from legacy (batch windows, volumes)
- **Behavior Contract** (the Given/When/Then rules — these are the acceptance tests)

## Phase B — HITL checkpoint #1

Present the spec summary. Ask the user **one focused question**: "Which of these capabilities are P0 for the reimagined system, and are there any we should deliberately drop?" Wait for the answer. Record it in the spec.

## Phase C — Architecture (single agent, then critique)

Design the target architecture for "$2":

- Mermaid C4 Container diagram
- Service boundaries with rationale (which rules/entities live where)
- Technology choices with one-line justification each
- Data migration approach from legacy stores

Then spawn **architecture-critic**: "Review this proposed architecture for $2 against the spec in analysis/$1/AI_NATIVE_SPEC.md. Identify over-engineering, missed requirements, scaling risks, and simpler alternatives." Incorporate the critique. Write the result to `analysis/$1/REIMAGINED_ARCHITECTURE.md`.

## Phase D — HITL checkpoint #2

Enter plan mode. Present the architecture. Wait for approval.

## Phase E — Parallel scaffolding

For each service in the approved architecture (cap at 3 for the demo), spawn a **general-purpose agent in parallel**:

"Scaffold the <service-name> service per analysis/$1/REIMAGINED_ARCHITECTURE.md and AI_NATIVE_SPEC.md. Create: project skeleton, domain model, API stubs matching the interface contracts, and **executable acceptance tests** for every behavior-contract rule assigned to this service (mark unimplemented ones as expected-failure/skip with the rule ID). Write to modernized/$1-reimagined/<service-name>/."

Show the agents' progress. When all complete, run the acceptance test suites and report: total tests, passing (scaffolded behavior), pending (rule IDs awaiting implementation).

## Phase F — Knowledge graph handoff

Write `modernized/$1-reimagined/CLAUDE.md` — the persistent context file for the new system, containing: architecture summary, service responsibilities, where the spec lives, how to run tests, and the legacy→modern traceability map. This file IS the knowledge graph that future agents and engineers will load.

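One possible skeleton for that file, written via heredoc; the section names are a suggestion, not a fixed schema:

```bash
cat > modernized/$1-reimagined/CLAUDE.md <<'EOF'
# Reimagined system: <name>

## Architecture summary
<one paragraph; full detail in analysis/<system>/REIMAGINED_ARCHITECTURE.md>

## Service responsibilities
<service: what it owns, which rule IDs it implements>

## Spec location
analysis/<system>/AI_NATIVE_SPEC.md (Given/When/Then rules double as acceptance tests)

## Running tests
<per-service test command>

## Legacy-to-modern traceability
<rule ID: legacy file:line -> modern file>
EOF
```
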
Report: services scaffolded, acceptance tests defined, % behaviors with a home, location of all artifacts.

plugins/code-modernization/commands/modernize-transform.md (new file, 78 lines)

---
description: Transform one legacy module to the target stack — idiomatic rewrite with behavior-equivalence tests
argument-hint: <system-dir> <module> <target-stack>
---

Transform `legacy/$1` module **`$2`** into **$3**, with proof of behavioral equivalence.

This is a surgical, single-module transformation — one vertical slice of the strangler fig. Output goes to `modernized/$1/$2/`.

## Step 0 — Plan (HITL gate)

Read the source module and any business rules in `analysis/$1/BUSINESS_RULES.md` that reference it. Then **enter plan mode** and present:

- Which source files are in scope
- The target module structure (packages/classes/files you'll create)
- Which business rules / behaviors this module implements
- How you'll prove equivalence (test strategy)
- Anything ambiguous that needs a human decision NOW

Wait for approval before writing any code.

## Step 1 — Characterization tests FIRST

Before writing target code, spawn the **test-engineer** subagent:

"Write characterization tests for legacy/$1 module $2. Read the source, identify every observable behavior, and encode each as a test case with concrete input → expected output pairs derived from the legacy logic. Target framework: <appropriate for $3>. Write to `modernized/$1/$2/src/test/`. These tests define 'done' — the new code must pass all of them."

Show the user the test file. Get a 👍 before proceeding.

## Step 2 — Idiomatic transformation

Write the target implementation in `modernized/$1/$2/src/main/`.

**Critical:** Write code a senior $3 engineer would write from the *specification*, not from the legacy structure. Do NOT mirror COBOL paragraphs as methods, and do NOT preserve legacy variable names like `WS-TEMP-AMT-X`. Use the target language's idioms: records/dataclasses, streams, dependency injection, proper error types, etc.

Include: domain model, service logic, API surface (REST controller or equivalent), and configuration. Add concise Javadoc/docstrings linking each class back to the rule IDs it implements.

## Step 3 — Prove it

Run the characterization tests:

```bash
cd modernized/$1/$2 && <appropriate test command for $3>
```

Show the output. If anything fails, fix and re-run until green.

## Step 4 — Side-by-side review

Generate `modernized/$1/$2/TRANSFORMATION_NOTES.md`:

- Mapping table: legacy file:lines → target file:lines, per behavior
- Deliberate deviations from legacy behavior (with rationale)
- What was NOT migrated (dead code, unreachable branches) and why
- Follow-ups for the next module that depends on this one

Then show a visual diff of one representative behavior, legacy vs modern:

```bash
delta --side-by-side <(sed -n '<lines>p' legacy/$1/<file>) modernized/$1/$2/src/main/<file>
```

## Step 5 — Architecture review

Spawn the **architecture-critic** subagent to review the transformed code against $3 best practices. Apply any HIGH-severity feedback; list the rest in TRANSFORMATION_NOTES.md.

Report: tests passing, lines of legacy retired, location of artifacts.