Fully migrate to "piece"
This commit is contained in:
parent
2fa1687a50
commit
38d43f2168
@@ -37,4 +37,4 @@ npm run test:watch # テスト実行をウォッチ
 - 脆弱性は公開 Issue ではなくメンテナへ直接報告します。
 - `.takt/logs/` など機密情報を含む可能性のあるファイルは共有しないでください。
 - `~/.takt/config.yaml` の `trusted` ディレクトリは最小限にし、不要なパスは登録しないでください。
-- 新しいワークフローを追加する場合は `~/.takt/workflows/` の既存スキーマを踏襲し、不要な拡張を避けます。
+- 新しいピースを追加する場合は `~/.takt/pieces/` の既存スキーマを踏襲し、不要な拡張を避けます。
CLAUDE.md (132 changed lines)
@@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 ## Project Overview
 
-TAKT (Task Agent Koordination Tool) is a multi-agent orchestration system for Claude Code. It enables YAML-based workflow definitions that coordinate multiple AI agents through state machine transitions with rule-based routing.
+TAKT (Task Agent Koordination Tool) is a multi-agent orchestration system for Claude Code. It enables YAML-based piece definitions that coordinate multiple AI agents through state machine transitions with rule-based routing.
 
 ## Development Commands
 
@@ -23,21 +23,21 @@ TAKT (Task Agent Koordination Tool) is a multi-agent orchestration system for Cl
 
 | Command | Description |
 |---------|-------------|
-| `takt {task}` | Execute task with current workflow |
+| `takt {task}` | Execute task with current piece |
 | `takt` | Interactive task input mode (chat with AI to refine requirements) |
 | `takt run` | Execute all pending tasks from `.takt/tasks/` once |
 | `takt watch` | Watch `.takt/tasks/` and auto-execute tasks (resident process) |
 | `takt add` | Add a new task via AI conversation |
 | `takt list` | List task branches (try merge, merge & cleanup, or delete) |
-| `takt switch` | Switch workflow interactively |
+| `takt switch` | Switch piece interactively |
 | `takt clear` | Clear agent conversation sessions (reset state) |
-| `takt eject` | Copy builtin workflow/agents to `~/.takt/` for customization |
+| `takt eject` | Copy builtin piece/agents to `~/.takt/` for customization |
 | `takt config` | Configure settings (permission mode) |
 | `takt --help` | Show help message |
 
-**Interactive mode:** Running `takt` (without arguments) or `takt {initial message}` starts an interactive planning session. The AI helps refine task requirements through conversation. Type `/go` to execute the task with the selected workflow, or `/cancel` to abort. Implemented in `src/features/interactive/`.
+**Interactive mode:** Running `takt` (without arguments) or `takt {initial message}` starts an interactive planning session. The AI helps refine task requirements through conversation. Type `/go` to execute the task with the selected piece, or `/cancel` to abort. Implemented in `src/features/interactive/`.
 
-**Pipeline mode:** Specifying `--pipeline` enables non-interactive mode suitable for CI/CD. Automatically creates a branch, runs the workflow, commits, and pushes. Use `--auto-pr` to also create a pull request. Use `--skip-git` to run workflow only (no git operations). Implemented in `src/features/pipeline/`.
+**Pipeline mode:** Specifying `--pipeline` enables non-interactive mode suitable for CI/CD. Automatically creates a branch, runs the piece, commits, and pushes. Use `--auto-pr` to also create a pull request. Use `--skip-git` to run piece only (no git operations). Implemented in `src/features/pipeline/`.
 
 **GitHub issue references:** `takt #6` fetches issue #6 and executes it as a task.
 
@@ -48,10 +48,10 @@ TAKT (Task Agent Koordination Tool) is a multi-agent orchestration system for Cl
 | `--pipeline` | Enable pipeline (non-interactive) mode — required for CI/automation |
 | `-t, --task <text>` | Task content (as alternative to GitHub issue) |
 | `-i, --issue <N>` | GitHub issue number (equivalent to `#N` in interactive mode) |
-| `-w, --workflow <name or path>` | Workflow name or path to workflow YAML file (v0.3.8+) |
+| `-w, --piece <name or path>` | Piece name or path to piece YAML file (v0.3.8+) |
 | `-b, --branch <name>` | Branch name (auto-generated if omitted) |
 | `--auto-pr` | Create PR after execution (interactive: skip confirmation, pipeline: enable PR) |
-| `--skip-git` | Skip branch creation, commit, and push (pipeline mode, workflow-only) |
+| `--skip-git` | Skip branch creation, commit, and push (pipeline mode, piece-only) |
 | `--repo <owner/repo>` | Repository for PR creation |
 | `--create-worktree <yes\|no>` | Skip worktree confirmation prompt |
 | `-q, --quiet` | **Minimal output mode: suppress AI output (for CI)** (v0.3.8+) |
@@ -66,7 +66,7 @@ TAKT (Task Agent Koordination Tool) is a multi-agent orchestration system for Cl
 ```
 CLI (cli.ts)
 → Slash commands or executeTask()
-→ WorkflowEngine (workflow/engine.ts)
+→ PieceEngine (piece/engine.ts)
 → Per step: 3-phase execution
 Phase 1: runAgent() → main work
 Phase 2: runReportPhase() → report output (if step.report defined)
@@ -85,7 +85,7 @@ Each step executes in up to 3 phases (session is resumed across phases):
 | Phase 2 | Report output | Write only | When `step.report` is defined |
 | Phase 3 | Status judgment | None (judgment only) | When step has tag-based rules |
 
-Phase 2/3 are implemented in `src/core/workflow/engine/phase-runner.ts`. The session is resumed so the agent retains context from Phase 1.
+Phase 2/3 are implemented in `src/core/piece/engine/phase-runner.ts`. The session is resumed so the agent retains context from Phase 1.
 
 ### Rule Evaluation (5-Stage Fallback)
 
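The phase gating in the table above can be sketched as follows. This is an illustrative sketch only, not the actual `StepExecutor` types: the field names `report` and `hasTagRules` are assumptions.

```typescript
// Hedged sketch: which of the three phases run for a step, per the table above.
// Field names (`report`, `hasTagRules`) are illustrative assumptions.
interface StepSketch {
  report?: string;       // Phase 2 runs only when step.report is defined
  hasTagRules?: boolean; // Phase 3 runs only when the step has tag-based rules
}

function phasesFor(step: StepSketch): string[] {
  const phases = ["main"]; // Phase 1 (main agent work) always runs
  if (step.report !== undefined) phases.push("report");
  if (step.hasTagRules) phases.push("status");
  return phases;
}
```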
@@ -97,40 +97,40 @@ After step execution, rules are evaluated to determine the next step. Evaluation
 4. **AI judge (ai() only)** - AI evaluates `ai("condition text")` rules
 5. **AI judge fallback** - AI evaluates ALL conditions as final resort
 
-Implemented in `src/core/workflow/evaluation/RuleEvaluator.ts`. The matched method is tracked as `RuleMatchMethod` type.
+Implemented in `src/core/piece/evaluation/RuleEvaluator.ts`. The matched method is tracked as `RuleMatchMethod` type.
 
 ### Key Components
 
-**WorkflowEngine** (`src/core/workflow/engine/WorkflowEngine.ts`)
+**PieceEngine** (`src/core/piece/engine/PieceEngine.ts`)
 - State machine that orchestrates agent execution via EventEmitter
 - Manages step transitions based on rule evaluation results
-- Emits events: `step:start`, `step:complete`, `step:blocked`, `step:loop_detected`, `workflow:complete`, `workflow:abort`, `iteration:limit`
+- Emits events: `step:start`, `step:complete`, `step:blocked`, `step:loop_detected`, `piece:complete`, `piece:abort`, `iteration:limit`
 - Supports loop detection (`LoopDetector`) and iteration limits
 - Maintains agent sessions per step for conversation continuity
 - Delegates to `StepExecutor` (normal steps) and `ParallelRunner` (parallel steps)
 
-**StepExecutor** (`src/core/workflow/engine/StepExecutor.ts`)
-- Executes a single workflow step through the 3-phase model
+**StepExecutor** (`src/core/piece/engine/StepExecutor.ts`)
+- Executes a single piece step through the 3-phase model
 - Phase 1: Main agent execution (with tools)
 - Phase 2: Report output (Write-only, optional)
 - Phase 3: Status judgment (no tools, optional)
 - Builds instructions via `InstructionBuilder`, detects matched rules via `RuleEvaluator`
 
-**ParallelRunner** (`src/core/workflow/engine/ParallelRunner.ts`)
+**ParallelRunner** (`src/core/piece/engine/ParallelRunner.ts`)
 - Executes parallel sub-steps concurrently via `Promise.all()`
 - Aggregates sub-step results for parent rule evaluation
 - Supports `all()` / `any()` aggregate conditions
 
-**RuleEvaluator** (`src/core/workflow/evaluation/RuleEvaluator.ts`)
+**RuleEvaluator** (`src/core/piece/evaluation/RuleEvaluator.ts`)
 - 5-stage fallback evaluation: aggregate → Phase 3 tag → Phase 1 tag → ai() judge → all-conditions AI judge
 - Returns `RuleMatch` with index and detection method (`aggregate`, `phase3_tag`, `phase1_tag`, `ai_judge`, `ai_fallback`)
 - Fail-fast: throws if rules exist but no rule matched
 - **v0.3.8+:** Tag detection now uses **last match** instead of first match when multiple `[STEP:N]` tags appear in output
 
-**Instruction Builder** (`src/core/workflow/instruction/InstructionBuilder.ts`)
+**Instruction Builder** (`src/core/piece/instruction/InstructionBuilder.ts`)
 - Auto-injects standard sections into every instruction (no need for `{task}` or `{previous_response}` placeholders in templates):
 1. Execution context (working dir, edit permission rules)
-2. Workflow context (iteration counts, report dir)
+2. Piece context (iteration counts, report dir)
 3. User request (`{task}` — auto-injected unless placeholder present)
 4. Previous response (auto-injected if `pass_previous_response: true`)
 5. User inputs (auto-injected unless `{user_inputs}` placeholder present)
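The 5-stage fallback order described above can be sketched as a simple chain. The `RuleMatchMethod` values come from the documentation; the `Stage` shape and `detectMatchedRule` body here are illustrative assumptions, not the real `RuleEvaluator` implementation.

```typescript
// Hedged sketch of the 5-stage fallback: try each detection method in order,
// stop at the first that matches, and fail fast if nothing matches.
type RuleMatchMethod = "aggregate" | "phase3_tag" | "phase1_tag" | "ai_judge" | "ai_fallback";

interface Stage {
  method: RuleMatchMethod;
  match: () => number | null; // matched rule index, or null if no match
}

function detectMatchedRule(stages: Stage[]): { index: number; method: RuleMatchMethod } {
  for (const stage of stages) {
    const index = stage.match();
    if (index !== null) return { index, method: stage.method };
  }
  // Fail-fast behavior: rules exist but no rule matched
  throw new Error("No rule matched");
}
```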
@@ -161,9 +161,9 @@ Implemented in `src/core/workflow/evaluation/RuleEvaluator.ts`. The matched meth
 
 **Configuration** (`src/infra/config/`)
 - `loaders/loader.ts` - Custom agent loading from `.takt/agents.yaml`
-- `loaders/workflowParser.ts` - YAML parsing, step/rule normalization with Zod validation
-- `loaders/workflowResolver.ts` - **3-layer resolution with correct priority** (v0.3.8+: user → project → builtin)
-- `loaders/workflowCategories.ts` - Workflow categorization and filtering
+- `loaders/pieceParser.ts` - YAML parsing, step/rule normalization with Zod validation
+- `loaders/pieceResolver.ts` - **3-layer resolution with correct priority** (v0.3.8+: user → project → builtin)
+- `loaders/pieceCategories.ts` - Piece categorization and filtering
 - `loaders/agentLoader.ts` - Agent prompt file loading
 - `paths.ts` - Directory structure (`.takt/`, `~/.takt/`), session management
 - `global/globalConfig.ts` - Global configuration (provider, model, trusted dirs, **quiet mode** v0.3.8+)
@@ -171,7 +171,7 @@ Implemented in `src/core/workflow/evaluation/RuleEvaluator.ts`. The matched meth
 
 **Task Management** (`src/features/tasks/`)
 - `execute/taskExecution.ts` - Main task execution orchestration
-- `execute/workflowExecution.ts` - Workflow execution wrapper
+- `execute/pieceExecution.ts` - Piece execution wrapper
 - `add/index.ts` - Interactive task addition via AI conversation
 - `list/index.ts` - List task branches with merge/delete actions
 - `watch/index.ts` - Watch for task files and auto-execute
@@ -183,18 +183,18 @@ Implemented in `src/core/workflow/evaluation/RuleEvaluator.ts`. The matched meth
 ### Data Flow
 
 1. User provides task (text or `#N` issue reference) or slash command → CLI
-2. CLI loads workflow with **correct priority** (v0.3.8+): user `~/.takt/workflows/` → project `.takt/workflows/` → builtin `resources/global/{lang}/workflows/`
-3. WorkflowEngine starts at `initial_step`
+2. CLI loads piece with **correct priority** (v0.3.8+): user `~/.takt/pieces/` → project `.takt/pieces/` → builtin `resources/global/{lang}/pieces/`
+3. PieceEngine starts at `initial_step`
 4. Each step: `buildInstruction()` → Phase 1 (main) → Phase 2 (report) → Phase 3 (status) → `detectMatchedRule()` → `determineNextStep()`
 5. Rule evaluation determines next step name (v0.3.8+: uses **last match** when multiple `[STEP:N]` tags appear)
-6. Special transitions: `COMPLETE` ends workflow successfully, `ABORT` ends with failure
+6. Special transitions: `COMPLETE` ends piece successfully, `ABORT` ends with failure
 
 ## Directory Structure
 
 ```
 ~/.takt/ # Global user config (created on first run)
-  config.yaml # Trusted dirs, default workflow, log level, language
-  workflows/ # User workflow YAML files (override builtins)
+  config.yaml # Trusted dirs, default piece, log level, language
+  pieces/ # User piece YAML files (override builtins)
   agents/ # User agent prompt files (.md)
 
 .takt/ # Project-level config
@@ -205,16 +205,16 @@ Implemented in `src/core/workflow/evaluation/RuleEvaluator.ts`. The matched meth
 
 resources/ # Bundled defaults (builtin, read from dist/ at runtime)
   global/
-    en/ # English agents and workflows
-    ja/ # Japanese agents and workflows
+    en/ # English agents and pieces
+    ja/ # Japanese agents and pieces
 ```
 
 Builtin resources are embedded in the npm package (`dist/resources/`). User files in `~/.takt/` take priority. Use `/eject` to copy builtins to `~/.takt/` for customization.
 
-## Workflow YAML Schema
+## Piece YAML Schema
 
 ```yaml
-name: workflow-name
+name: piece-name
 description: Optional description
 max_iterations: 10
 initial_step: plan # First step to execute
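Putting the top-level keys from the schema above together, a minimal piece might look like the sketch below. The top-level keys (`name`, `description`, `max_iterations`, `initial_step`) and the special `COMPLETE` transition come from this document; the `steps`/`agent`/`rules` layout is an assumption for illustration and may not match the actual schema.

```yaml
# Hypothetical minimal piece. Top-level keys are from the schema above;
# the step layout below is an illustrative assumption.
name: example-piece
description: Plan, then implement
max_iterations: 10
initial_step: plan
steps:
  - name: plan
    agent: ../agents/default/planner.md   # resolved relative to the piece file
    rules:
      - condition: ai("the plan is ready")
        next: implement
  - name: implement
    agent: ../agents/default/coder.md
    rules:
      - condition: ai("implementation complete")
        next: COMPLETE   # special transition: end the piece successfully
```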
@@ -288,48 +288,48 @@ Key points about parallel steps:
 | Variable | Description |
 |----------|-------------|
 | `{task}` | Original user request (auto-injected if not in template) |
-| `{iteration}` | Workflow-wide iteration count |
+| `{iteration}` | Piece-wide iteration count |
 | `{max_iterations}` | Maximum iterations allowed |
 | `{step_iteration}` | Per-step iteration count |
 | `{previous_response}` | Previous step output (auto-injected if not in template) |
 | `{user_inputs}` | Accumulated user inputs (auto-injected if not in template) |
 | `{report_dir}` | Report directory name |
 
-### Workflow Categories
+### Piece Categories
 
-Workflows can be organized into categories for better UI presentation. Categories are configured in:
+Pieces can be organized into categories for better UI presentation. Categories are configured in:
 - `resources/global/{lang}/default-categories.yaml` - Default builtin categories
-- `~/.takt/config.yaml` - User-defined categories (via `workflow_categories` field)
+- `~/.takt/config.yaml` - User-defined categories (via `piece_categories` field)
 
 Category configuration supports:
 - Nested categories (unlimited depth)
-- Per-category workflow lists
-- "Others" category for uncategorized workflows (can be disabled via `show_others_category: false`)
-- Builtin workflow filtering (disable via `builtin_workflows_enabled: false`, or selectively via `disabled_builtins: [name1, name2]`)
+- Per-category piece lists
+- "Others" category for uncategorized pieces (can be disabled via `show_others_category: false`)
+- Builtin piece filtering (disable via `builtin_pieces_enabled: false`, or selectively via `disabled_builtins: [name1, name2]`)
 
 Example category config:
 ```yaml
-workflow_categories:
+piece_categories:
   Development:
-    workflows: [default, simple]
+    pieces: [default, simple]
     children:
       Backend:
-        workflows: [expert-cqrs]
+        pieces: [expert-cqrs]
       Frontend:
-        workflows: [expert]
+        pieces: [expert]
   Research:
-    workflows: [research, magi]
+    pieces: [research, magi]
 show_others_category: true
-others_category_name: "Other Workflows"
+others_category_name: "Other Pieces"
 ```
 
-Implemented in `src/infra/config/loaders/workflowCategories.ts`.
+Implemented in `src/infra/config/loaders/pieceCategories.ts`.
 
 ### Model Resolution
 
 Model is resolved in the following priority order:
 
-1. **Workflow step `model`** - Highest priority (specified in step YAML)
+1. **Piece step `model`** - Highest priority (specified in step YAML)
 2. **Custom agent `model`** - Agent-level model in `.takt/agents.yaml`
 3. **Global config `model`** - Default model in `~/.takt/config.yaml`
 4. **Provider default** - Falls back to provider's default (Claude: sonnet, Codex: gpt-5.2-codex)
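The four-level model resolution order above is a simple first-defined-wins chain. The function and parameter names below are assumptions for illustration, not the actual TAKT API; only the priority order comes from the document.

```typescript
// Hedged sketch of the model resolution priority described above:
// step model → agent model → global config model → provider default.
function resolveModel(opts: {
  stepModel?: string;      // 1. piece step `model` (step YAML)
  agentModel?: string;     // 2. custom agent `model` (.takt/agents.yaml)
  globalModel?: string;    // 3. global config `model` (~/.takt/config.yaml)
  providerDefault: string; // 4. provider default (e.g. "sonnet" for Claude)
}): string {
  return opts.stepModel ?? opts.agentModel ?? opts.globalModel ?? opts.providerDefault;
}
```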
@@ -346,11 +346,11 @@ Session logs use NDJSON (`.jsonl`) format for real-time append-only writes. Reco
 
 | Record | Description |
 |--------|-------------|
-| `workflow_start` | Workflow initialization with task, workflow name |
+| `piece_start` | Piece initialization with task, piece name |
 | `step_start` | Step execution start |
 | `step_complete` | Step result with status, content, matched rule info |
-| `workflow_complete` | Successful completion |
-| `workflow_abort` | Abort with reason |
+| `piece_complete` | Successful completion |
+| `piece_abort` | Abort with reason |
 
 Files: `.takt/logs/{sessionId}.jsonl`, with `latest.json` pointer. Legacy `.json` format is still readable via `loadSessionLog()`.
 
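Because the session log is NDJSON, it can be consumed one JSON object per line. The record type names in this sketch come from the table above; the other fields (`task`, `piece`, `step`, `status`) are illustrative assumptions about the record shape.

```typescript
// Hedged sketch: parsing an NDJSON session log line by line.
// Record `type` values are from the table above; other fields are assumed.
const sample = [
  '{"type":"piece_start","task":"fix bug","piece":"default"}',
  '{"type":"step_start","step":"plan"}',
  '{"type":"step_complete","step":"plan","status":"done"}',
  '{"type":"piece_complete"}',
].join("\n");

// NDJSON means each line is an independent JSON document.
const records = sample.split("\n").map((line) => JSON.parse(line));
const types = records.map((r) => r.type);
```

In practice the same loop would read `.takt/logs/{sessionId}.jsonl` instead of an in-memory string.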
@@ -365,26 +365,26 @@ Files: `.takt/logs/{sessionId}.jsonl`, with `latest.json` pointer. Legacy `.json
 
 **Keep commands minimal.** One command per concept. Use arguments/modes instead of multiple similar commands. Before adding a new command, consider if existing commands can be extended.
 
-**Do NOT expand schemas carelessly.** Rule conditions are free-form text (not enum-restricted). However, the engine's behavior depends on specific patterns (`ai()`, `all()`, `any()`). Do not add new special syntax without updating the loader's regex parsing in `workflowParser.ts`.
+**Do NOT expand schemas carelessly.** Rule conditions are free-form text (not enum-restricted). However, the engine's behavior depends on specific patterns (`ai()`, `all()`, `any()`). Do not add new special syntax without updating the loader's regex parsing in `pieceParser.ts`.
 
 **Instruction auto-injection over explicit placeholders.** The instruction builder auto-injects `{task}`, `{previous_response}`, `{user_inputs}`, and status rules. Templates should contain only step-specific instructions, not boilerplate.
 
-**Agent prompts contain only domain knowledge.** Agent prompt files (`resources/global/{lang}/agents/**/*.md`) must contain only domain expertise and behavioral principles — never workflow-specific procedures. Workflow-specific details (which reports to read, step routing, specific templates with hardcoded step names) belong in the workflow YAML's `instruction_template`. This keeps agents reusable across different workflows.
+**Agent prompts contain only domain knowledge.** Agent prompt files (`resources/global/{lang}/agents/**/*.md`) must contain only domain expertise and behavioral principles — never piece-specific procedures. Piece-specific details (which reports to read, step routing, specific templates with hardcoded step names) belong in the piece YAML's `instruction_template`. This keeps agents reusable across different pieces.
 
 What belongs in agent prompts:
 - Role definition ("You are a ... specialist")
 - Domain expertise, review criteria, judgment standards
 - Do / Don't behavioral rules
-- Tool usage knowledge (general, not workflow-specific)
+- Tool usage knowledge (general, not piece-specific)
 
-What belongs in workflow `instruction_template`:
+What belongs in piece `instruction_template`:
 - Step-specific procedures ("Read these specific reports")
 - References to other steps or their outputs
 - Specific report file names or formats
 - Comment/output templates with hardcoded review type names
 
-**Separation of concerns in workflow engine:**
-- `WorkflowEngine` - Orchestration, state management, event emission
+**Separation of concerns in piece engine:**
+- `PieceEngine` - Orchestration, state management, event emission
 - `StepExecutor` - Single step execution (3-phase model)
 - `ParallelRunner` - Parallel step execution
 - `RuleEvaluator` - Rule matching and evaluation
@@ -413,10 +413,10 @@ Key constraints:
 
 **Error handling flow:**
 1. Provider error (Claude SDK / Codex) → `AgentResponse.error`
-2. `StepExecutor` captures error → `WorkflowEngine` emits `step:complete` with error
+2. `StepExecutor` captures error → `PieceEngine` emits `step:complete` with error
 3. Error logged to session log (`.takt/logs/{sessionId}.jsonl`)
 4. Console output shows error details
-5. Workflow transitions to `ABORT` step if error is unrecoverable
+5. Piece transitions to `ABORT` step if error is unrecoverable
 
 ## Debugging
 
@@ -429,25 +429,25 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d
 
 **Verbose mode:** Create `.takt/verbose` file (empty file) to enable verbose console output. This automatically enables debug logging and sets log level to `debug`.
 
-**Session logs:** All workflow executions are logged to `.takt/logs/{sessionId}.jsonl`. Use `tail -f .takt/logs/{sessionId}.jsonl` to monitor in real-time.
+**Session logs:** All piece executions are logged to `.takt/logs/{sessionId}.jsonl`. Use `tail -f .takt/logs/{sessionId}.jsonl` to monitor in real-time.
 
-**Testing with mocks:** Use `--provider mock` to test workflows without calling real AI APIs. Mock responses are deterministic and configurable via test fixtures.
+**Testing with mocks:** Use `--provider mock` to test pieces without calling real AI APIs. Mock responses are deterministic and configurable via test fixtures.
 
 ## Testing Notes
 
 - Vitest for testing framework
 - Tests use file system fixtures in `__tests__/` subdirectories
-- Mock workflows and agent configs for integration tests
+- Mock pieces and agent configs for integration tests
 - Test single files: `npx vitest run src/__tests__/filename.test.ts`
 - Pattern matching: `npx vitest run -t "test pattern"`
-- Integration tests: Tests with `it-` prefix are integration tests that simulate full workflow execution
-- Engine tests: Tests with `engine-` prefix test specific WorkflowEngine scenarios (happy path, error handling, parallel execution, etc.)
+- Integration tests: Tests with `it-` prefix are integration tests that simulate full piece execution
+- Engine tests: Tests with `engine-` prefix test specific PieceEngine scenarios (happy path, error handling, parallel execution, etc.)
 
 ## Important Implementation Notes
 
 **Agent prompt resolution:**
-- Agent paths in workflow YAML are resolved relative to the workflow file's directory
-- `../agents/default/coder.md` resolves from workflow file location
+- Agent paths in piece YAML are resolved relative to the piece file's directory
+- `../agents/default/coder.md` resolves from piece file location
 - Built-in agents are loaded from `dist/resources/global/{lang}/agents/`
 - User agents are loaded from `~/.takt/agents/` or `.takt/agents.yaml`
 - If agent file doesn't exist, the agent string is used as inline system prompt
@ -476,7 +476,7 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d
|
|||||||
- **v0.3.8+:** When multiple `[STEP:N]` tags appear in output, **last match wins** (not first)
|
- **v0.3.8+:** When multiple `[STEP:N]` tags appear in output, **last match wins** (not first)
|
||||||
- `ai()` conditions are evaluated by Claude/Codex, not by string matching
|
- `ai()` conditions are evaluated by Claude/Codex, not by string matching
|
||||||
- Aggregate conditions (`all()`, `any()`) only work in parallel parent steps
|
- Aggregate conditions (`all()`, `any()`) only work in parallel parent steps
|
||||||
- Fail-fast: if rules exist but no rule matches, workflow aborts
|
- Fail-fast: if rules exist but no rule matches, piece aborts
|
||||||
- Interactive-only rules are skipped in pipeline mode (`rule.interactiveOnly === true`)
|
- Interactive-only rules are skipped in pipeline mode (`rule.interactiveOnly === true`)
|
||||||
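The routing behavior described above might be sketched as a movement's rule block. This is an illustration only, loosely following the piece YAML examples elsewhere in this repository; the movement names, agent paths, and condition strings are hypothetical:

```yaml
# Hypothetical movement showing fail-fast routing (all names are examples)
movements:
  - name: review
    agent: ../agents/default/reviewer.md
    rules:
      - condition: ai("the review found no blocking issues")
        next: COMPLETE        # special value: finish the piece successfully
      - condition: ai("the review requested changes")
        next: implement       # loop back to another movement
      # If neither rule matches, the piece aborts (fail-fast)
```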
**Provider-specific behavior:**
150 README.md

@@ -4,22 +4,32 @@

**T**ask **A**gent **K**oordination **T**ool - A governance-first orchestrator for running coding agents safely and responsibly

TAKT coordinates AI agents like Claude Code and Codex according to your organization's rules and pieces. It clarifies who is responsible, what is permitted, and how to recover from failures, while automating complex development tasks.

TAKT is built with TAKT itself (dogfooding).

## Metaphor

TAKT uses a music metaphor to describe orchestration:

- **Piece**: A task execution definition (what to do and how agents coordinate)
- **Movement**: A step inside a piece (a single stage in the flow)
- **Orchestration**: The engine that coordinates agents across movements

You can read every term as standard workflow language (piece = workflow, movement = step), but the metaphor is used to keep the system conceptually consistent.

## TAKT is For Teams That Need

- **Want to integrate AI into CI/CD but fear runaway execution** — Clarify control scope with piece definitions
- **Want automated PR generation but need audit logs** — Record and track all execution history
- **Want to use multiple AI models but manage them uniformly** — Control Claude/Codex/Mock with the same piece
- **Want to reproduce and debug agent failures** — Maintain complete history with session logs and reports

## What TAKT is NOT

- **Not an autonomous engineer** — TAKT doesn't complete implementations itself; it governs and coordinates multiple agents
- **Not competing with Claude Code Swarm** — While leveraging Swarm's execution power, TAKT provides "operational guardrails" such as piece definitions, permission controls, and audit logs
- **Not just a piece engine** — TAKT is designed to address AI-specific challenges (non-determinism, accountability, audit requirements, and reproducibility)

## Requirements

@@ -71,17 +81,17 @@ takt hello

**Note:** If you specify a string with spaces, Issue reference (`#6`), or `--task` / `--issue` options, interactive mode will be skipped and the task will be executed directly.

**Flow:**

1. Select piece
2. Refine task content through conversation with AI
3. Finalize task instructions with `/go` (you can also add additional instructions like `/go additional instructions`)
4. Execute (create worktree, run piece, create PR)

#### Execution Example

```
$ takt

Select piece:
❯ 🎼 default (current)
  📁 Development/
  📁 Research/

@@ -110,7 +120,7 @@ Proceed with these task instructions? (Y/n) y

? Create worktree? (Y/n) y

[Piece execution starts...]
```

### Direct Task Execution

@@ -124,8 +134,8 @@ takt "Add login feature"

# Specify task content with --task option
takt --task "Fix bug"

# Specify piece
takt "Add authentication" --piece expert

# Auto-create PR
takt "Fix bug" --auto-pr

@@ -140,8 +150,8 @@ You can execute GitHub Issues directly as tasks. Issue title, body, labels, and

takt #6
takt --issue 6

# Issue + piece specification
takt #6 --piece expert

# Issue + auto-create PR
takt #6 --auto-pr

@@ -186,7 +196,7 @@ takt list

### Pipeline Mode (for CI/Automation)

Specifying `--pipeline` enables non-interactive pipeline mode. Automatically creates branch → runs piece → commits & pushes. Suitable for CI/CD automation.

```bash
# Execute task in pipeline mode

@@ -198,13 +208,13 @@ takt --pipeline --task "Fix bug" --auto-pr

# Link issue information
takt --pipeline --issue 99 --auto-pr

# Specify piece and branch
takt --pipeline --task "Fix bug" -w magi -b feat/fix-bug

# Specify repository (for PR creation)
takt --pipeline --task "Fix bug" --auto-pr --repo owner/repo

# Piece execution only (skip branch creation, commit, push)
takt --pipeline --task "Fix bug" --skip-git

# Minimal output mode (for CI)

@@ -218,10 +228,10 @@ In pipeline mode, PRs are not created unless `--auto-pr` is specified.

### Other Commands

```bash
# Interactively switch pieces
takt switch

# Copy builtin pieces/agents to ~/.takt/ for customization
takt eject

# Clear agent conversation sessions

@@ -231,13 +241,13 @@ takt clear

takt config
```

### Recommended Pieces

| Piece | Recommended Use |
|----------|-----------------|
| `default` | Serious development tasks. Used for TAKT's own development. Multi-stage review with parallel reviews (architect + security). |
| `minimal` | Simple fixes and straightforward tasks. Minimal piece with basic review. |
| `review-fix-minimal` | Review & fix piece. Specialized for iterative improvement based on review feedback. |
| `research` | Investigation and research. Autonomously executes research without asking questions. |

### Main Options

@@ -247,23 +257,23 @@ takt config

| `--pipeline` | **Enable pipeline (non-interactive) mode** — Required for CI/automation |
| `-t, --task <text>` | Task content (alternative to GitHub Issue) |
| `-i, --issue <N>` | GitHub issue number (same as `#N` in interactive mode) |
| `-w, --piece <name or path>` | Piece name or path to piece YAML file |
| `-b, --branch <name>` | Specify branch name (auto-generated if omitted) |
| `--auto-pr` | Create PR (interactive: skip confirmation, pipeline: enable PR) |
| `--skip-git` | Skip branch creation, commit, and push (pipeline mode, piece-only) |
| `--repo <owner/repo>` | Specify repository (for PR creation) |
| `--create-worktree <yes\|no>` | Skip worktree confirmation prompt |
| `-q, --quiet` | Minimal output mode: suppress AI output (for CI) |
| `--provider <name>` | Override agent provider (claude\|codex\|mock) |
| `--model <name>` | Override agent model |

## Pieces

TAKT uses YAML-based piece definitions and rule-based routing. Builtin pieces are embedded in the package, with user pieces in `~/.takt/pieces/` taking priority. Use `takt eject` to copy builtins to `~/.takt/` for customization.

> **Note (v0.4.0)**: Internal terminology has changed from "step" to "movement" for piece components. User-facing piece files remain compatible, but if you customize pieces, you may see `movements:` instead of `steps:` in YAML files. The functionality remains the same.

### Piece Example

```yaml
name: default

@@ -370,22 +380,22 @@ Execute sub-movements in parallel within a movement and evaluate with aggregate

| AI judge | `ai("condition text")` | AI evaluates condition against agent output |
| Aggregate | `all("X")` / `any("X")` | Aggregates parallel sub-movement matched conditions |
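As a sketch of how aggregate conditions might be wired, assuming a parallel parent movement with sub-movements as described above — the `parallel:` key, movement names, agent paths, and condition strings here are all illustrative, not the confirmed schema:

```yaml
# Hypothetical parallel movement with aggregate routing (names are examples)
movements:
  - name: parallel-review
    parallel:
      - name: architect-review
        agent: ../agents/default/architect.md
      - name: security-review
        agent: ../agents/default/security.md
    rules:
      - condition: all("approved")    # every sub-movement matched "approved"
        next: COMPLETE
      - condition: any("needs-fix")   # at least one sub-movement requested changes
        next: implement
```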
## Builtin Pieces

TAKT includes multiple builtin pieces:

| Piece | Description |
|----------|-------------|
| `default` | Full development piece: plan → architecture design → implement → AI review → parallel review (architect + security) → supervisor approval. Includes fix loops at each review stage. |
| `minimal` | Quick piece: plan → implement → review → supervisor. Minimal steps for fast iteration. |
| `review-fix-minimal` | Review-focused piece: review → fix → supervisor. For iterative improvement based on review feedback. |
| `research` | Research piece: planner → digger → supervisor. Autonomously executes research without asking questions. |
| `expert` | Full-stack development piece: architecture, frontend, security, QA reviews with fix loops. |
| `expert-cqrs` | Full-stack development piece (CQRS+ES specialized): CQRS+ES, frontend, security, QA reviews with fix loops. |
| `magi` | Deliberation system inspired by Evangelion. Three AI personas (MELCHIOR, BALTHASAR, CASPER) analyze and vote. |
| `review-only` | Read-only code review piece that makes no changes. |

Use `takt switch` to switch pieces.

## Builtin Agents

@@ -415,7 +425,7 @@ You are a code reviewer specialized in security.

## Model Selection

The `model` field (in piece movements, agent config, or global config) is passed directly to the provider (Claude Code CLI / Codex SDK). TAKT does not resolve model aliases.

### Claude Code

@@ -429,14 +439,14 @@ The model string is passed to the Codex SDK. If unspecified, defaults to `codex`

```
~/.takt/              # Global configuration directory
├── config.yaml       # Global config (provider, model, piece, etc.)
├── pieces/           # User piece definitions (override builtins)
│   └── custom.yaml
└── agents/           # User agent prompt files (.md)
    └── my-agent.md

.takt/                # Project-level configuration
├── config.yaml       # Project config (current piece, etc.)
├── tasks/            # Pending task files (.yaml, .md)
├── completed/        # Completed tasks and reports
├── reports/          # Execution reports (auto-generated)

@@ -444,7 +454,7 @@ The model string is passed to the Codex SDK. If unspecified, defaults to `codex`

└── logs/             # NDJSON format session logs
    ├── latest.json   # Pointer to current/latest session
    ├── previous.json # Pointer to previous session
    └── {sessionId}.jsonl # NDJSON session log per piece execution
```

Builtin resources are embedded in the npm package (`dist/resources/`). User files in `~/.takt/` take priority.

@@ -456,7 +466,7 @@ Configure default provider and model in `~/.takt/config.yaml`:

```yaml
# ~/.takt/config.yaml
language: en
default_piece: default
log_level: info
provider: claude   # Default provider: claude or codex
model: sonnet      # Default model (optional)

@@ -505,10 +515,10 @@ Priority: Environment variables > `config.yaml` settings

| `{title}` | Commit message | Issue title |
| `{issue}` | Commit message, PR body | Issue number |
| `{issue_body}` | PR body | Issue body |
| `{report}` | PR body | Piece execution report |

**Model Resolution Priority:**

1. Piece movement `model` (highest priority)
2. Custom agent `model`
3. Global config `model`
4. Provider default (Claude: sonnet, Codex: codex)
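To make the precedence concrete, a minimal sketch under the configuration schema shown in this README — the agent name, model values, and the agent-definition fields are illustrative:

```yaml
# ~/.takt/config.yaml — global default (lowest explicit level)
model: sonnet
---
# Custom agent definition — overrides the global default for this agent
# (fields are illustrative)
name: coder
model: opus
---
# Piece movement — overrides both agent and global for this movement
movements:
  - name: implement
    agent: coder
    model: sonnet   # highest priority
```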
@@ -519,14 +529,14 @@ Priority: Environment variables > `config.yaml` settings

TAKT supports batch processing with task files in `.takt/tasks/`. Both `.yaml`/`.yml` and `.md` file formats are supported.

**YAML format** (recommended, supports worktree/branch/piece options):

```yaml
# .takt/tasks/add-auth.yaml
task: "Add authentication feature"
worktree: true            # Execute in isolated shared clone
branch: "feat/add-auth"   # Branch name (auto-generated if omitted)
piece: "default"          # Piece specification (uses current if omitted)
```

**Markdown format** (simple, backward compatible):
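The Markdown form is not shown in this hunk; a minimal sketch, assuming a free-form Markdown body is treated as the task content (the filename and wording are hypothetical):

```markdown
<!-- .takt/tasks/add-auth.md (hypothetical example) -->
Add authentication feature

Support email/password login and session cookies.
```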
@@ -561,25 +571,25 @@ TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/logs/`. Each reco

- `.takt/logs/latest.json` - Pointer to current (or latest) session
- `.takt/logs/previous.json` - Pointer to previous session
- `.takt/logs/{sessionId}.jsonl` - NDJSON session log per piece execution

Record types: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`

Agents can read `previous.json` to inherit context from the previous execution. Session continuation is automatic — just run `takt "task"` to continue from the previous session.

### Adding Custom Pieces

Add YAML files to `~/.takt/pieces/` or customize builtins with `takt eject`:

```bash
# Copy default piece to ~/.takt/pieces/ and edit
takt eject default
```

```yaml
# ~/.takt/pieces/my-piece.yaml
name: my-piece
description: Custom piece
max_iterations: 5
initial_movement: analyze

@@ -609,10 +619,10 @@ movements:

### Specifying Agents by Path

In piece definitions, specify agents using file paths:

```yaml
# Relative path from piece file
agent: ../agents/default/coder.md

# Home directory

@@ -622,24 +632,24 @@ agent: ~/.takt/agents/default/coder.md

agent: /path/to/custom/agent.md
```

### Piece Variables

Variables available in `instruction_template`:

| Variable | Description |
|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide turn count (total steps executed) |
| `{max_iterations}` | Maximum iteration count |
| `{movement_iteration}` | Per-movement iteration count (times this movement has been executed) |
| `{previous_response}` | Output from previous movement (auto-injected if not in template) |
| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
| `{report_dir}` | Report directory path (e.g., `.takt/reports/20250126-143052-task-summary`) |
| `{report:filename}` | Expands to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |
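A movement's `instruction_template` might use these variables as follows — a sketch only; the movement name, agent path, and wording are illustrative:

```yaml
movements:
  - name: implement
    agent: ../agents/default/coder.md
    instruction_template: |
      Task: {task}
      Attempt {movement_iteration} of this movement (turn {iteration}/{max_iterations} overall).
      Previous movement output:
      {previous_response}
      Write your plan to {report:00-plan.md} under {report_dir}.
```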
### Piece Design

Elements needed for each piece movement:

**1. Agent** - Markdown file containing system prompt:

@@ -675,13 +685,13 @@ Special `next` values: `COMPLETE` (success), `ABORT` (failure)

## API Usage Example

```typescript
import { PieceEngine, loadPiece } from 'takt'; // npm install takt

const config = loadPiece('default');
if (!config) {
  throw new Error('Piece not found');
}
const engine = new PieceEngine(config, process.cwd(), 'My task');

engine.on('step:complete', (step, response) => {
  console.log(`${step.name}: ${response.status}`);

@@ -700,7 +710,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md) for details.

TAKT provides a GitHub Action for automating PR reviews and task execution. See [takt-action](https://github.com/nrslib/takt-action) for details.

**Workflow example** (a GitHub Actions workflow; see [.github/workflows/takt-action.yml](../.github/workflows/takt-action.yml) in this repository):

```yaml
name: TAKT

@@ -755,7 +765,7 @@ export TAKT_OPENAI_API_KEY=sk-...

## Documentation

- [Piece Guide](./docs/pieces.md) - Creating and customizing pieces
- [Agent Guide](./docs/agents.md) - Configuring custom agents
- [Changelog](../CHANGELOG.md) - Version history
- [Security Policy](../SECURITY.md) - Vulnerability reporting
@@ -39,12 +39,12 @@ TAKT orchestrates AI agents that can execute code and access files. Users should

- **Trusted Directories**: TAKT requires explicit configuration of trusted directories in `~/.takt/config.yaml`
- **Agent Permissions**: Agents have access to tools like Bash, Edit, Write based on their configuration
- **Piece Definitions**: Only use piece files from trusted sources
- **Session Logs**: Session logs in `.takt/logs/` may contain sensitive information

### Best Practices

1. Review piece YAML files before using them
2. Keep TAKT updated to the latest version
3. Limit trusted directories to necessary paths only
4. Be cautious when using custom agents from untrusted sources
@ -2,22 +2,30 @@
|
|||||||
|
|
||||||
**T**ask **A**gent **K**oordination **T**ool - AIエージェントを「安全に」「責任を持って」運用するための協調制御システム

TAKTは、Claude CodeやCodexなどのAIエージェントを、組織のルールとピースに従って協調させます。誰が責任を持つか・どこまで許可するか・失敗時にどう戻すかを明確にしながら、複雑な開発タスクを自動化します。

TAKTはTAKT自身で開発されています(ドッグフーディング)。

## メタファ

TAKTはオーケストラをイメージした音楽メタファで用語を統一しています。

- **Piece**: タスク実行定義(何をどう協調させるか)
- **Movement**: ピース内の1ステップ(実行フローの1段階)
- **Orchestration**: ムーブメント間でエージェントを協調させるエンジン

## TAKTが向いているチーム

- **CI/CDにAIを組み込みたいが、暴走が怖い** — ピース定義で制御範囲を明確化
- **PRの自動生成をしたいが、監査ログが必要** — 全ての実行履歴を記録・追跡可能
- **複数のAIモデルを使い分けたいが、統一的に管理したい** — Claude/Codex/モックを同じピースで制御
- **エージェントの失敗を再現・デバッグしたい** — セッションログとレポートで完全な履歴を保持

## TAKTとは何でないか

- **自律型AIエンジニアの代替ではありません** — TAKT自身が実装を完結するのではなく、複数のエージェントを統治・協調させます
- **Claude Code Swarmの競合ではありません** — Swarmの実行力を活かしつつ、TAKTはピース/権限/監査ログなど「運用のガードレール」を提供します
- **単なるワークフローエンジンではありません** — 非決定性、責任所在、監査要件、再現性といったAI特有の課題に対応した設計です

## 必要条件
**注意:** スペースを含む文字列や Issue 参照(`#6`)、`--task` / `--issue` オプションを指定すると、対話モードをスキップして直接タスク実行されます。

**フロー:**
1. ピース選択
2. AI との会話でタスク内容を整理
3. `/go` でタスク指示を確定(`/go 追加の指示` のように指示を追加することも可能)
4. 実行(worktree 作成、ピース実行、PR 作成)

#### 実行例

```
$ takt

Select piece:
❯ 🎼 default (current)
  📁 Development/
  📁 Research/

? Create worktree? (Y/n) y

[ピース実行開始...]
```

### 直接タスク実行
```bash
takt "ログイン機能を追加する"

# --task オプションでタスク内容を指定
takt --task "バグを修正"

# ピース指定
takt "認証機能を追加" --piece expert

# PR 自動作成
takt "バグを修正" --auto-pr
```
```bash
takt #6
takt --issue 6

# Issue + ピース指定
takt #6 --piece expert

# Issue + PR自動作成
takt #6 --auto-pr
```
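`#N` 形式のIssue参照の判定は、たとえば次のようにスケッチできます(説明用の仮定スケッチで、TAKT本体の実装とは異なる可能性があります):

```typescript
// "#6" のような引数をIssue番号として解釈し、それ以外はタスク文とみなす。
function parseIssueRef(arg: string): number | null {
  const match = arg.match(/^#(\d+)$/);
  return match ? Number(match[1]) : null;
}

console.log(parseIssueRef("#6")); // 6
console.log(parseIssueRef("ログイン機能を追加する")); // null
```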
### パイプラインモード(CI/自動化向け)

`--pipeline` を指定すると非対話のパイプラインモードに入ります。ブランチ作成 → ピース実行 → commit & push を自動で行います。CI/CD での自動化に適しています。

```bash
# タスクをパイプライン実行
takt --pipeline --task "バグを修正" --auto-pr

# Issue情報を紐付け
takt --pipeline --issue 99 --auto-pr

# ピース・ブランチ指定
takt --pipeline --task "バグを修正" -w magi -b feat/fix-bug

# リポジトリ指定(PR作成時)
takt --pipeline --task "バグを修正" --auto-pr --repo owner/repo

# ピース実行のみ(ブランチ作成・commit・pushをスキップ)
takt --pipeline --task "バグを修正" --skip-git

# 最小限の出力モード(CI向け)
takt --pipeline --task "バグを修正" --quiet
```
### その他のコマンド

```bash
# ピースを対話的に切り替え
takt switch

# ビルトインのピース/エージェントを~/.takt/にコピーしてカスタマイズ
takt eject

# エージェントの会話セッションをクリア
takt clear

takt config
```
### おすすめピース

| ピース | おすすめ用途 |
|------------|------------|
| `default` | 本格的な開発タスク。TAKT自身の開発で使用。アーキテクト+セキュリティの並列レビュー付き多段階レビュー。 |
| `minimal` | 簡単な修正やシンプルなタスク。基本的なレビュー付きの最小限のピース。 |
| `review-fix-minimal` | レビュー&修正ピース。レビューフィードバックに基づく反復的な改善に特化。 |
| `research` | 調査・リサーチ。質問せずに自律的にリサーチを実行。 |
### 主要なオプション

| オプション | 説明 |
|------|------|
| `--pipeline` | **パイプライン(非対話)モードを有効化** — CI/自動化に必須 |
| `-t, --task <text>` | タスク内容(GitHub Issueの代わり) |
| `-i, --issue <N>` | GitHub Issue番号(対話モードでは `#N` と同じ) |
| `-w, --piece <name or path>` | ピース名、またはピースYAMLファイルのパス |
| `-b, --branch <name>` | ブランチ名指定(省略時は自動生成) |
| `--auto-pr` | PR作成(対話: 確認スキップ、パイプライン: PR有効化) |
| `--skip-git` | ブランチ作成・commit・pushをスキップ(パイプラインモード、ピース実行のみ) |
| `--repo <owner/repo>` | リポジトリ指定(PR作成時) |
| `--create-worktree <yes\|no>` | worktree確認プロンプトをスキップ |
| `-q, --quiet` | 最小限の出力モード: AIの出力を抑制(CI向け) |
| `--provider <name>` | エージェントプロバイダーを上書き(claude\|codex\|mock) |
| `--model <name>` | エージェントモデルを上書き |
## ピース

TAKTはYAMLベースのピース定義とルールベースルーティングを使用します。ビルトインピースはパッケージに埋め込まれており、`~/.takt/pieces/` のユーザーピースが優先されます。`takt eject` でビルトインを`~/.takt/`にコピーしてカスタマイズできます。

> **注記 (v0.4.0)**: ピースコンポーネントの内部用語が "step" から "movement" に変更されました。ユーザー向けのピースファイルは引き続き互換性がありますが、ピースをカスタマイズする場合、YAMLファイルで `steps:` の代わりに `movements:` が使用されることがあります。機能は同じです。

### ピースの例

```yaml
name: default
# ...(省略)
```
| 種別 | 構文 | 説明 |
|------|------|------|
| AI判定 | `ai("条件テキスト")` | AIが条件をエージェント出力に対して評価 |
| 集約 | `all("X")` / `any("X")` | パラレルサブムーブメントの結果を集約 |
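集約の意味は次のスケッチのように捉えられます(評価器の実装と結果ラベルの形式は説明用の仮定で、TAKT本体のコードではありません):

```typescript
// パラレルサブムーブメントの結果。label は各サブムーブメントの判定結果(仮定)。
type SubResult = { name: string; label: string };

// all("X"): 全サブムーブメントがラベルXのときにtrue
function all(label: string, results: SubResult[]): boolean {
  return results.every((r) => r.label === label);
}

// any("X"): いずれかのサブムーブメントがラベルXのときにtrue
function any(label: string, results: SubResult[]): boolean {
  return results.some((r) => r.label === label);
}

const results: SubResult[] = [
  { name: "architect-review", label: "approved" },
  { name: "security-review", label: "needs_fix" },
];
console.log(all("approved", results)); // false
console.log(any("approved", results)); // true
```

`all("approved")` を承認ゲートに、`any("needs_fix")` を修正ループへの分岐に使う、といった構成が典型的な使い方になります。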
## ビルトインピース

TAKTには複数のビルトインピースが同梱されています:

| ピース | 説明 |
|------------|------|
| `default` | フル開発ピース: 計画 → アーキテクチャ設計 → 実装 → AI レビュー → 並列レビュー(アーキテクト+セキュリティ)→ スーパーバイザー承認。各レビュー段階に修正ループあり。 |
| `minimal` | クイックピース: 計画 → 実装 → レビュー → スーパーバイザー。高速イテレーション向けの最小構成。 |
| `review-fix-minimal` | レビュー重視ピース: レビュー → 修正 → スーパーバイザー。レビューフィードバックに基づく反復改善向け。 |
| `research` | リサーチピース: プランナー → ディガー → スーパーバイザー。質問せずに自律的にリサーチを実行。 |
| `expert` | フルスタック開発ピース: アーキテクチャ、フロントエンド、セキュリティ、QA レビューと修正ループ。 |
| `expert-cqrs` | フルスタック開発ピース(CQRS+ES特化): CQRS+ES、フロントエンド、セキュリティ、QA レビューと修正ループ。 |
| `magi` | エヴァンゲリオンにインスパイアされた審議システム。3つの AI ペルソナ(MELCHIOR、BALTHASAR、CASPER)が分析し投票。 |
| `review-only` | 変更を加えない読み取り専用のコードレビューピース。 |

`takt switch` でピースを切り替えられます。
## ビルトインエージェント
## モデル選択

`model` フィールド(ピースのムーブメント、エージェント設定、グローバル設定)はプロバイダー(Claude Code CLI / Codex SDK)にそのまま渡されます。TAKTはモデルエイリアスの解決を行いません。

### Claude Code
```
~/.takt/                    # グローバル設定ディレクトリ
├── config.yaml             # グローバル設定(プロバイダー、モデル、ピース等)
├── pieces/                 # ユーザーピース定義(ビルトインを上書き)
│   └── custom.yaml
└── agents/                 # ユーザーエージェントプロンプトファイル(.md)
    └── my-agent.md

.takt/                      # プロジェクトレベルの設定
├── config.yaml             # プロジェクト設定(現在のピース等)
├── tasks/                  # 保留中のタスクファイル(.yaml, .md)
├── completed/              # 完了したタスクとレポート
├── reports/                # 実行レポート(自動生成)
└── logs/                   # NDJSON 形式のセッションログ
    ├── latest.json         # 現在/最新セッションへのポインタ
    ├── previous.json       # 前回セッションへのポインタ
    └── {sessionId}.jsonl   # ピース実行ごとの NDJSON セッションログ
```

ビルトインリソースはnpmパッケージ(`dist/resources/`)に埋め込まれています。`~/.takt/` のユーザーファイルが優先されます。
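この「ユーザーファイルがビルトインを上書きする」解決は、次のように表せます(パスとデータ構造は説明用の仮定です):

```typescript
// 名前→YAMLパスの対応表から、ユーザー定義を優先して解決する。
function resolvePiecePath(
  name: string,
  userPieces: Record<string, string>,
  builtinPieces: Record<string, string>,
): string | null {
  return userPieces[name] ?? builtinPieces[name] ?? null;
}

// 例: default はユーザー版が存在するのでそちらが使われる(パスは仮)
const user: Record<string, string> = {
  custom: "~/.takt/pieces/custom.yaml",
  default: "~/.takt/pieces/default.yaml",
};
const builtin: Record<string, string> = {
  default: "dist/resources/pieces/default.yaml",
  minimal: "dist/resources/pieces/minimal.yaml",
};

console.log(resolvePiecePath("default", user, builtin)); // ユーザー版のパス
console.log(resolvePiecePath("minimal", user, builtin)); // ビルトインのパス
```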
```yaml
# ~/.takt/config.yaml
language: ja
default_piece: default
log_level: info
provider: claude # デフォルトプロバイダー: claude または codex
model: sonnet # デフォルトモデル(オプション)
trusted_directories:
  # ...(省略)
```

| 変数 | 使用箇所 | 説明 |
|------|------|------|
| `{title}` | コミットメッセージ | Issueタイトル |
| `{issue}` | コミットメッセージ、PR本文 | Issue番号 |
| `{issue_body}` | PR本文 | Issue本文 |
| `{report}` | PR本文 | ピース実行レポート |

**モデル解決の優先順位:**
1. ピースのムーブメントの `model`(最優先)
2. カスタムエージェントの `model`
3. グローバル設定の `model`
4. プロバイダーデフォルト(Claude: sonnet、Codex: codex)
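この優先順位は、次のようなフォールバックとして表せます(動作を説明するための仮定スケッチで、TAKT本体の実装とは異なる可能性があります):

```typescript
// movement > agent > global > プロバイダーデフォルト の順で最初に定義された値を使う。
function resolveModel(
  movementModel: string | undefined,
  agentModel: string | undefined,
  globalModel: string | undefined,
  provider: "claude" | "codex",
): string {
  const providerDefault = provider === "claude" ? "sonnet" : "codex";
  return movementModel ?? agentModel ?? globalModel ?? providerDefault;
}

console.log(resolveModel(undefined, "opus", "sonnet", "claude")); // "opus"
console.log(resolveModel(undefined, undefined, undefined, "codex")); // "codex"
```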
TAKT は `.takt/tasks/` 内のタスクファイルによるバッチ処理をサポートしています。`.yaml`/`.yml` と `.md` の両方のファイル形式に対応しています。

**YAML形式**(推奨、worktree/branch/pieceオプション対応):

```yaml
# .takt/tasks/add-auth.yaml
task: "認証機能を追加する"
worktree: true # 隔離された共有クローンで実行
branch: "feat/add-auth" # ブランチ名(省略時は自動生成)
piece: "default" # ピース指定(省略時は現在のもの)
```
**Markdown形式**(シンプル、後方互換):
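2種類のタスクファイル形式の判別は、たとえば次のように表せます(説明用の仮定スケッチです):

```typescript
// 拡張子でタスクファイルの形式を判別する。
// .yaml/.yml はオプション付きタスク、.md は本文全体をタスクとして扱う想定。
function classifyTaskFile(filename: string): "yaml" | "markdown" | null {
  if (/\.(yaml|yml)$/.test(filename)) return "yaml";
  if (/\.md$/.test(filename)) return "markdown";
  return null;
}

console.log(classifyTaskFile("add-auth.yaml")); // "yaml"
console.log(classifyTaskFile("notes.md")); // "markdown"
console.log(classifyTaskFile("README.txt")); // null
```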
- `.takt/logs/latest.json` - 現在(または最新の)セッションへのポインタ
- `.takt/logs/previous.json` - 前回セッションへのポインタ
- `.takt/logs/{sessionId}.jsonl` - ピース実行ごとのNDJSONセッションログ

レコード種別: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`

エージェントは`previous.json`を読み取って前回の実行コンテキストを引き継ぐことができます。セッション継続は自動的に行われます — `takt "タスク"`を実行するだけで前回のセッションから続行されます。
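NDJSONログの読み取りは次のようにスケッチできます(各レコードのフィールド構成は仮定です。実際のスキーマは `.takt/logs/` の実ログを参照してください):

```typescript
// NDJSON(1行1JSON)をパースし、レコード種別ごとに件数を集計する。
type LogRecord = { type: string; [key: string]: unknown };

function parseNdjson(text: string): LogRecord[] {
  return text
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as LogRecord);
}

// サンプルログ(フィールドは説明用の仮定)
const sample = [
  '{"type":"piece_start","piece":"default"}',
  '{"type":"step_start","step":"plan"}',
  '{"type":"step_complete","step":"plan"}',
  '{"type":"piece_complete"}',
].join("\n");

const records = parseNdjson(sample);
const counts: Record<string, number> = {};
for (const r of records) {
  counts[r.type] = (counts[r.type] ?? 0) + 1;
}
console.log(counts);
```

行単位で追記されるNDJSONは、実行中でも途中まで安全にパースできる点がセッションログに向いています。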
### カスタムピースの追加

`~/.takt/pieces/` に YAML ファイルを追加するか、`takt eject` でビルトインをカスタマイズします:

```bash
# defaultピースを~/.takt/pieces/にコピーして編集
takt eject default
```

```yaml
# ~/.takt/pieces/my-piece.yaml
name: my-piece
description: カスタムピース
max_iterations: 5
initial_movement: analyze

movements:
  # ...(省略)
```
### エージェントをパスで指定する

ピース定義ではファイルパスを使ってエージェントを指定します:

```yaml
# ピースファイルからの相対パス
agent: ../agents/default/coder.md

# ホームディレクトリ
agent: ~/.takt/agents/default/coder.md

# 絶対パス
agent: /path/to/custom/agent.md
```
### ピース変数

`instruction_template`で使用可能な変数:

| 変数 | 説明 |
|------|------|
| `{task}` | 元のユーザーリクエスト(テンプレートになければ自動注入) |
| `{iteration}` | ピース全体のターン数(実行された全ムーブメント数) |
| `{max_iterations}` | 最大イテレーション数 |
| `{movement_iteration}` | ムーブメントごとのイテレーション数(このムーブメントが実行された回数) |
| `{previous_response}` | 前のムーブメントの出力(テンプレートになければ自動注入) |
| `{user_inputs}` | ピース中の追加ユーザー入力(テンプレートになければ自動注入) |
| `{report_dir}` | レポートディレクトリパス(例: `.takt/reports/20250126-143052-task-summary`) |
| `{report:filename}` | `{report_dir}/filename` に展開(例: `{report:00-plan.md}`) |
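`{report:filename}` を含む変数展開は、たとえば次のように実装できます(仮定に基づく最小スケッチで、TAKT本体の展開ロジックとは異なる可能性があります):

```typescript
// {report:xxx} を先に {report_dir}/xxx へ展開し、残りの {name} を変数表で置換する。
function expandTemplate(template: string, vars: Record<string, string>): string {
  return template
    .replace(/\{report:([^}]+)\}/g, (_m: string, file: string) => `${vars["report_dir"]}/${file}`)
    .replace(/\{(\w+)\}/g, (m: string, name: string) => vars[name] ?? m);
}

const out = expandTemplate(
  "タスク: {task}\n計画: {report:00-plan.md} ({iteration}/{max_iterations})",
  {
    task: "認証機能を追加",
    report_dir: ".takt/reports/20250126-143052-task-summary",
    iteration: "2",
    max_iterations: "10",
  },
);
console.log(out);
```

`{report:...}` を汎用の `{name}` 置換より先に処理するのは、コロンを含むパターンが `\w+` にマッチしないためです。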
### ピースの設計

各ピースのムーブメントに必要な要素:

**1. エージェント** - システムプロンプトを含むMarkdownファイル:
## API使用例

```typescript
import { PieceEngine, loadPiece } from 'takt'; // npm install takt

const config = loadPiece('default');
if (!config) {
  throw new Error('Piece not found');
}
const engine = new PieceEngine(config, process.cwd(), 'My task');

engine.on('step:complete', (step, response) => {
  console.log(`${step.name}: ${response.status}`);
});

await engine.run();
```
TAKTはPRレビューやタスク実行を自動化するGitHub Actionを提供しています。詳細は [takt-action](https://github.com/nrslib/takt-action) を参照してください。

**ワークフロー例** (このリポジトリの [.github/workflows/takt-action.yml](../.github/workflows/takt-action.yml) を参照):

```yaml
name: TAKT
# ...(省略)
jobs:
  takt:
    # ...(省略)
    permissions:
      issues: write
      pull-requests: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4
      # ...(省略)
```
## ドキュメント

- [Piece Guide](./pieces.md) - ピースの作成とカスタマイズ
- [Agent Guide](./agents.md) - カスタムエージェントの設定
- [Changelog](../CHANGELOG.md) - バージョン履歴
- [Security Policy](../SECURITY.md) - 脆弱性報告
## Specifying Agents

In piece YAML, agents are specified by file path:

```yaml
# Relative to piece file directory
agent: ../agents/default/coder.md

# Home directory
agent: ~/.takt/agents/default/coder.md
```

```markdown
You are a security-focused code reviewer.

- Verify proper error handling
```

> **Note**: Agents do NOT need to output status markers manually. The piece engine auto-injects status output rules into agent instructions based on the step's `rules` configuration. Agents output `[STEP:N]` tags (where N is the 0-based rule index) which the engine uses for routing.
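As a rough illustration, tag extraction can be sketched as follows (the rule labels and matching logic here are assumptions for the example, not TAKT's actual code):

```typescript
// Extract the 0-based rule index from an agent's [STEP:N] tag, if present.
function parseStepTag(output: string): number | null {
  const match = output.match(/\[STEP:(\d+)\]/);
  return match ? Number(match[1]) : null;
}

const rules = ["approved", "needs_fix", "abort"]; // hypothetical rule labels
const agentOutput = "Review finished. [STEP:1] Please address the comments.";
const index = parseStepTag(agentOutput);
console.log(index !== null ? rules[index] : "no tag"); // "needs_fix"
```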
### Using agents.yaml

| Field | Description |
|-------|-------------|
| `name` | Agent identifier (referenced in piece steps) |
| `prompt_file` | Path to Markdown prompt file |
| `prompt` | Inline prompt text (alternative to `prompt_file`) |
| `allowed_tools` | List of tools the agent can use |

```yaml
# piece.yaml
steps:
  - name: implement
    agent: ../agents/default/coder.md
```
## 目次

1. [シーケンス図: インタラクティブモードからピース実行まで](#シーケンス図-インタラクティブモードからピース実行まで)
2. [フローチャート: 3フェーズステップ実行](#フローチャート-3フェーズステップ実行)
3. [フローチャート: ルール評価の5段階フォールバック](#フローチャート-ルール評価の5段階フォールバック)
4. [ステートマシン図: PieceEngineのステップ遷移](#ステートマシン図-pieceengineのステップ遷移)

---

## シーケンス図: インタラクティブモードからピース実行まで
```mermaid
sequenceDiagram
    participant Interactive as Interactive Layer
    participant Orchestration as Execution Orchestration
    participant TaskExec as Task Execution
    participant PieceExec as Piece Execution
    participant Engine as PieceEngine
    participant StepExec as StepExecutor
    participant Provider as Provider Layer

    %% ...(省略)

    CLI->>Orchestration: selectAndExecuteTask(cwd, task)

    Orchestration->>Orchestration: determinePiece()
    Note over Orchestration: ピース選択<br/>(interactive or override)

    Orchestration->>Orchestration: confirmAndCreateWorktree()
    Orchestration->>Provider: summarizeTaskName(task)
    Orchestration->>Orchestration: createSharedClone()

    Orchestration->>TaskExec: executeTask(options)
    TaskExec->>TaskExec: loadPieceByIdentifier()
    TaskExec->>PieceExec: executePiece(config, task, cwd)

    PieceExec->>PieceExec: セッション管理初期化
    Note over PieceExec: loadAgentSessions()<br/>generateSessionId()<br/>initNdjsonLog()

    PieceExec->>Engine: new PieceEngine(config, cwd, task, options)
    PieceExec->>Engine: イベント購読 (step:start, step:complete, etc.)
    PieceExec->>Engine: engine.run()

    loop ピースステップ
        Engine->>StepExec: runStep(step)

        StepExec->>StepExec: InstructionBuilder.build()
        %% ...(省略)

        Engine->>Engine: resolveNextStep()

        alt nextStep === COMPLETE
            Engine-->>PieceExec: ピース完了
        else nextStep === ABORT
            Engine-->>PieceExec: ピース中断
        else 通常ステップ
            Engine->>Engine: state.currentStep = nextStep
        end
    end

    PieceExec-->>TaskExec: { success: boolean }
    TaskExec-->>Orchestration: taskSuccess

    opt taskSuccess && isWorktree
        Note over Orchestration: ...(省略)
    end
```
---

## ステートマシン図: PieceEngineのステップ遷移

```mermaid
stateDiagram-v2
    [*] --> Initializing: new PieceEngine

    Initializing --> Running: engine.run()
    note right of Initializing
        ...(省略)
    end note

    state Running {
        Transition --> CheckAbort: state.currentStep = nextStep
    }

    Running --> Completed: piece:complete
    Running --> Aborted: piece:abort

    Completed --> [*]: return state
    Aborted --> [*]: return state

    note right of Completed
        state.status = 'completed'
        emit piece:complete
    end note

    note right of Aborted
        state.status = 'aborted'
        emit piece:abort
        原因:
        - User abort (Ctrl+C)
        - Iteration limit
    end note
```
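このステートマシンを単純化すると、次のような実行ループとして表せます(仮定を含むスケッチで、実際のPieceEngineの実装とは異なります):

```typescript
// initializing → running → completed / aborted の遷移を模した最小エンジン。
type EngineStatus = "initializing" | "running" | "completed" | "aborted";

class MiniEngine {
  status: EngineStatus = "initializing";

  run(steps: Array<() => "continue" | "complete" | "abort">): EngineStatus {
    this.status = "running";
    for (const step of steps) {
      const result = step();
      if (result === "complete") {
        this.status = "completed";
        return this.status;
      }
      if (result === "abort") {
        this.status = "aborted";
        return this.status;
      }
    }
    // ステップを使い切った場合はイテレーション上限相当として中断扱い
    this.status = "aborted";
    return this.status;
  }
}

const engine = new MiniEngine();
console.log(engine.run([() => "continue", () => "complete"])); // "completed"
```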
```mermaid
flowchart LR
    subgraph Transform2 ["変換2: 環境準備"]
        D1[determinePiece]
        D2[summarizeTaskName<br/>AI呼び出し]
        D3[createSharedClone]
    end

    subgraph Execution ["実行環境"]
        E1[pieceIdentifier]
        E2[execCwd, branch]
    end

    subgraph Transform3 ["変換3: 設定読み込み"]
        F1[loadPieceByIdentifier]
        F2[loadAgentSessions]
    end

    subgraph Config ["設定"]
        G1[PieceConfig]
        G2[initialSessions]
    end

    subgraph State ["実行状態"]
        I[PieceState]
    end

    subgraph Transform5 ["変換5: インストラクション"]
    end
```
1. **シーケンス図**: 時系列での各レイヤー間のやりとり
2. **3フェーズフローチャート**: ステップ実行の詳細な処理フロー
3. **ルール評価フローチャート**: 5段階フォールバックの意思決定ロジック
4. **ステートマシン**: PieceEngineの状態遷移
5. **データ変換図**: 各段階でのデータ形式変換
6. **コンテキスト蓄積図**: 実行が進むにつれてコンテキストが蓄積される様子
# TAKTデータフロー解析

このドキュメントでは、TAKTにおけるデータフロー、特にインタラクティブモードからピース実行に至るまでのデータの流れを説明します。

## 目次
1. **CLI Layer** - ユーザー入力の受付
2. **Interactive Layer** - タスクの対話的な明確化
3. **Execution Orchestration Layer** - ピース選択とworktree管理
4. **Piece Execution Layer** - セッション管理とイベント処理
5. **Engine Layer** - ステートマシンによるステップ実行
6. **Instruction Building Layer** - プロンプト生成
7. **Provider Layer** - AIプロバイダーとの通信
│ (selectAndExecute.ts) │
|
│ (selectAndExecute.ts) │
|
||||||
│ │
|
│ │
|
||||||
│ ┌──────────────────────┐ │
|
│ ┌──────────────────────┐ │
|
||||||
│ │ determineWorkflow() │ ← workflow選択 (interactive/override) │
|
│ │ determinePiece() │ ← piece選択 (interactive/override) │
|
||||||
│ └─────────┬────────────┘ │
|
│ └─────────┬────────────┘ │
|
||||||
│ │ workflowIdentifier: string │
|
│ │ pieceIdentifier: string │
|
||||||
│ ▼ │
|
│ ▼ │
|
||||||
│ ┌──────────────────────────────────┐ │
|
│ ┌──────────────────────────────────┐ │
|
||||||
│ │ confirmAndCreateWorktree() │ │
|
│ │ confirmAndCreateWorktree() │ │
|
||||||
@@ -82,19 +82,19 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 │ │ executeTask() │ │
 │ │ - task: string │ │
 │ │ - cwd: string (実行ディレクトリ) │ │
-│ │ - workflowIdentifier: string │ │
+│ │ - pieceIdentifier: string │ │
 │ │ - projectCwd: string (.takt/在処) │ │
 │ └─────────┬────────────────────────┘ │
 └────────────┼────────────────────────────────────────────────────┘
 │
 ▼
 ┌─────────────────────────────────────────────────────────────────┐
-│ 4. Workflow Execution Layer │
-│ (workflowExecution.ts, taskExecution.ts) │
+│ 4. Piece Execution Layer │
+│ (pieceExecution.ts, taskExecution.ts) │
 │ │
 │ ┌────────────────────────────────┐ │
-│ │ loadWorkflowByIdentifier() │ │
-│ │ → WorkflowConfig │ │
+│ │ loadPieceByIdentifier() │ │
+│ │ → PieceConfig │ │
 │ └────────┬───────────────────────┘ │
 │ │ │
 │ ▼ │
@@ -108,10 +108,10 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 │ │ │
 │ ▼ │
 │ ┌────────────────────────────────┐ │
-│ │ WorkflowEngine initialization │ │
+│ │ PieceEngine initialization │ │
 │ │ │ │
-│ │ new WorkflowEngine( │ │
-│ │ config: WorkflowConfig, │ │
+│ │ new PieceEngine( │ │
+│ │ config: PieceConfig, │ │
 │ │ cwd: string, │ │
 │ │ task: string, │ │
 │ │ options: { │ │
@@ -132,8 +132,8 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 │ │ - step:start │ │
 │ │ - step:complete │ │
 │ │ - step:report │ │
-│ │ - workflow:complete │ │
-│ │ - workflow:abort │ │
+│ │ - piece:complete │ │
+│ │ - piece:abort │ │
 │ └────────┬───────────────────────┘ │
 │ │ │
 │ ▼ │
@@ -144,7 +144,7 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 │
 ▼
 ┌─────────────────────────────────────────────────────────────────┐
-│ 5. Engine Layer (WorkflowEngine.ts) │
+│ 5. Engine Layer (PieceEngine.ts) │
 │ │
 │ ┌────────────────────────────────────────┐ │
 │ │ State Machine Loop │ │
@@ -222,7 +222,7 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 │ │ │ │ ││
 │ │ │ InstructionBuilder.build() │ ││
 │ │ │ ├─ Execution Context (cwd, permission) │ ││
-│ │ │ ├─ Workflow Context (iteration, step, report) │ ││
+│ │ │ ├─ Piece Context (iteration, step, report) │ ││
 │ │ │ ├─ User Request ({task}) │ ││
 │ │ │ ├─ Previous Response ({previous_response}) │ ││
 │ │ │ ├─ Additional User Inputs ({user_inputs}) │ ││
@@ -317,11 +317,11 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 - パイプラインモード vs 通常モードの判定

 **データ入力**:
-- CLI引数: `task`, `--workflow`, `--issue`, など
+- CLI引数: `task`, `--piece`, `--issue`, など

 **データ出力**:
 - `task: string` (タスク記述)
-- `workflow: string | undefined` (ワークフロー名またはパス)
+- `piece: string | undefined` (ピース名またはパス)
 - `createWorktree: boolean | undefined`
 - その他オプション

@@ -362,15 +362,15 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され

 ### 3. Execution Orchestration Layer (`src/features/tasks/execute/selectAndExecute.ts`)

-**役割**: ワークフロー選択とworktree管理
+**役割**: ピース選択とworktree管理

 **主要な処理**:

-1. **ワークフロー決定** (`determineWorkflow()`):
+1. **ピース決定** (`determinePiece()`):
    - オーバーライド指定がある場合:
      - パス形式 → そのまま使用
      - 名前形式 → バリデーション
-   - オーバーライドなし → インタラクティブ選択 (`selectWorkflow()`)
+   - オーバーライドなし → インタラクティブ選択 (`selectPiece()`)

 2. **Worktree作成** (`confirmAndCreateWorktree()`):
    - ユーザー確認 (または `--create-worktree` フラグ)
@@ -385,7 +385,7 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
 **データ入力**:
 - `task: string`
 - `options?: SelectAndExecuteOptions`:
-  - `workflow?: string`
+  - `piece?: string`
   - `createWorktree?: boolean`
   - `autoPr?: boolean`
   - `agentOverrides?: TaskExecutionOptions`
@@ -396,35 +396,35 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され

 ---

-### 4. Workflow Execution Layer
+### 4. Piece Execution Layer

 #### 4.1 Task Execution (`src/features/tasks/execute/taskExecution.ts`)

-**役割**: ワークフロー読み込みと実行の橋渡し
+**役割**: ピース読み込みと実行の橋渡し

 **主要な処理**:
-1. `loadWorkflowByIdentifier()`: YAMLまたは名前からワークフロー設定を読み込み
-2. `executeWorkflow()` を呼び出し
+1. `loadPieceByIdentifier()`: YAMLまたは名前からピース設定を読み込み
+2. `executePiece()` を呼び出し

 **データ入力**:
 - `ExecuteTaskOptions`:
   - `task: string`
   - `cwd: string` (実行ディレクトリ、cloneまたはプロジェクトルート)
-  - `workflowIdentifier: string`
+  - `pieceIdentifier: string`
   - `projectCwd: string` (`.takt/`がある場所)
   - `agentOverrides?: TaskExecutionOptions`

 **データ出力**:
 - `boolean` (成功/失敗)

-#### 4.2 Workflow Execution (`src/features/tasks/execute/workflowExecution.ts`)
+#### 4.2 Piece Execution (`src/features/tasks/execute/pieceExecution.ts`)

 **役割**: セッション管理、イベント購読、ログ記録

 **主要な処理**:

 1. **セッション管理**:
-   - `generateSessionId()`: ワークフローセッションID生成
+   - `generateSessionId()`: ピースセッションID生成
    - `loadAgentSessions()` / `loadWorktreeSessions()`: エージェントセッション復元
    - `updateAgentSession()` / `updateWorktreeSession()`: セッション保存

@@ -433,9 +433,9 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
    - `initNdjsonLog()`: NDJSON形式のログファイル初期化
    - `updateLatestPointer()`: `latest.json` ポインタ更新

-3. **WorkflowEngine初期化**:
+3. **PieceEngine初期化**:
    ```typescript
-   new WorkflowEngine(workflowConfig, cwd, task, {
+   new PieceEngine(pieceConfig, cwd, task, {
      onStream: streamHandler, // UI表示用ストリームハンドラ
      initialSessions: savedSessions, // 保存済みセッションID
      onSessionUpdate: sessionUpdateHandler,
@@ -451,36 +451,36 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
    - `step:start`: ステップ開始 → UI表示、NDJSON記録
    - `step:complete`: ステップ完了 → UI表示、NDJSON記録、セッション更新
    - `step:report`: レポートファイル出力
-   - `workflow:complete`: ワークフロー完了 → 通知
-   - `workflow:abort`: ワークフロー中断 → エラー通知
+   - `piece:complete`: ピース完了 → 通知
+   - `piece:abort`: ピース中断 → エラー通知

 5. **SIGINT処理**:
    - 1回目: Graceful abort (`engine.abort()`)
    - 2回目: 強制終了

 **データ入力**:
-- `WorkflowConfig`
+- `PieceConfig`
 - `task: string`
 - `cwd: string`
-- `WorkflowExecutionOptions`
+- `PieceExecutionOptions`

 **データ出力**:
-- `WorkflowExecutionResult`:
+- `PieceExecutionResult`:
   - `success: boolean`
   - `reason?: string`

 ---

-### 5. Engine Layer (`src/core/workflow/engine/WorkflowEngine.ts`)
+### 5. Engine Layer (`src/core/piece/engine/PieceEngine.ts`)

-**役割**: ステートマシンによるワークフロー実行制御
+**役割**: ステートマシンによるピース実行制御

 **主要な構成要素**:

-1. **State管理** (`WorkflowState`):
+1. **State管理** (`PieceState`):
    - `status`: 'running' | 'completed' | 'aborted'
    - `currentStep`: 現在実行中のステップ名
-   - `iteration`: ワークフロー全体のイテレーション数
+   - `iteration`: ピース全体のイテレーション数
    - `stepIterations`: Map<stepName, count> (ステップごとの実行回数)
    - `agentSessions`: Map<agent, sessionId> (エージェントごとのセッションID)
    - `stepOutputs`: Map<stepName, AgentResponse> (各ステップの出力)
@@ -540,20 +540,20 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
    - `determineNextStepByRules()` で次ステップ名を取得

 **データ入力**:
-- `WorkflowConfig`
+- `PieceConfig`
 - `cwd: string`
 - `task: string`
-- `WorkflowEngineOptions`
+- `PieceEngineOptions`

 **データ出力**:
-- `WorkflowState` (最終状態)
+- `PieceState` (最終状態)
 - イベント発行 (各ステップの進捗)

 ---

 ### 6. Instruction Building & Step Execution Layer

-#### 6.1 Step Execution (`src/core/workflow/engine/StepExecutor.ts`)
+#### 6.1 Step Execution (`src/core/piece/engine/StepExecutor.ts`)

 **役割**: 3フェーズモデルによるステップ実行

@@ -604,7 +604,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
 - `InstructionBuilder` を使用してインストラクション文字列を生成
 - コンテキスト情報を渡す

-#### 6.2 Instruction Building (`src/core/workflow/instruction/InstructionBuilder.ts`)
+#### 6.2 Instruction Building (`src/core/piece/instruction/InstructionBuilder.ts`)

 **役割**: Phase 1用のインストラクション文字列生成

@@ -614,8 +614,8 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
    - Working directory
    - Permission rules (edit mode)

-2. **Workflow Context**:
-   - Iteration (workflow-wide)
+2. **Piece Context**:
+   - Iteration (piece-wide)
    - Step Iteration (per-step)
    - Step name
    - Report Directory/File info
@@ -642,7 +642,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
 - `{task}`: ユーザーリクエスト
 - `{previous_response}`: 前ステップの出力
 - `{user_inputs}`: 追加ユーザー入力
-- `{iteration}`: ワークフロー全体のイテレーション
+- `{iteration}`: ピース全体のイテレーション
 - `{max_iterations}`: 最大イテレーション
 - `{step_iteration}`: ステップのイテレーション
 - `{report_dir}`: レポートディレクトリ
@@ -753,9 +753,9 @@ async call(

 ### ステージ2: 実行環境準備

-**ワークフロー選択**:
-- `--workflow` フラグ → 検証
-- なし → インタラクティブ選択 (`selectWorkflow()`)
+**ピース選択**:
+- `--piece` フラグ → 検証
+- なし → インタラクティブ選択 (`selectPiece()`)

 **Worktree作成** (オプション):
 - `confirmAndCreateWorktree()`:
@@ -764,21 +764,21 @@ async call(
   - `createSharedClone()`: git clone --shared

 **データ**:
-- `workflowIdentifier: string`
+- `pieceIdentifier: string`
 - `{ execCwd, isWorktree, branch }`

 ---

-### ステージ3: ワークフロー実行初期化
+### ステージ3: ピース実行初期化

 **セッション管理**:
 - `loadAgentSessions()`: 保存済みセッション復元
-- `generateSessionId()`: ワークフローセッションID生成
+- `generateSessionId()`: ピースセッションID生成
 - `initNdjsonLog()`: NDJSON ログファイル作成

-**WorkflowEngine作成**:
+**PieceEngine作成**:
 ```typescript
-new WorkflowEngine(workflowConfig, cwd, task, {
+new PieceEngine(pieceConfig, cwd, task, {
   onStream,
   initialSessions,
   onSessionUpdate,
@@ -790,7 +790,7 @@ new WorkflowEngine(workflowConfig, cwd, task, {
 ```

 **データ**:
-- `WorkflowState`: 初期状態
+- `PieceState`: 初期状態
   - `currentStep = config.initialStep`
   - `iteration = 0`
   - `agentSessions = initialSessions`
@@ -871,8 +871,8 @@ new WorkflowEngine(workflowConfig, cwd, task, {
 **遷移**:
 - `determineNextStepByRules()`: `rules[index].next` を取得
 - 特殊ステップ:
-  - `COMPLETE`: ワークフロー完了
-  - `ABORT`: ワークフロー中断
+  - `COMPLETE`: ピース完了
+  - `ABORT`: ピース中断
 - 通常ステップ: `state.currentStep = nextStep`

 ---
@@ -891,7 +891,7 @@ function buildTaskFromHistory(history: ConversationMessage[]): string {
 }
 ```

-**重要性**: インタラクティブモードで蓄積された会話全体が、後続のワークフロー実行で単一の `task` 文字列として扱われる。
+**重要性**: インタラクティブモードで蓄積された会話全体が、後続のピース実行で単一の `task` 文字列として扱われる。

 ---

@@ -912,15 +912,15 @@ await summarizeTaskName(task, { cwd })

 ---

-### 3. ワークフロー設定 → WorkflowState
+### 3. ピース設定 → PieceState

-**場所**: `src/core/workflow/engine/state-manager.ts`
+**場所**: `src/core/piece/engine/state-manager.ts`

 ```typescript
 function createInitialState(
-  config: WorkflowConfig,
-  options: WorkflowEngineOptions
-): WorkflowState {
+  config: PieceConfig,
+  options: PieceEngineOptions
+): PieceState {
   return {
     status: 'running',
     currentStep: config.initialStep,
@@ -939,10 +939,10 @@ function createInitialState(

 ### 4. コンテキスト → インストラクション文字列

-**場所**: `src/core/workflow/instruction/InstructionBuilder.ts`
+**場所**: `src/core/piece/instruction/InstructionBuilder.ts`

 **入力**:
-- `step: WorkflowStep`
+- `step: PieceStep`
 - `context: InstructionContext` (task, iteration, previousOutput, userInputs, など)

 **処理**:
@@ -958,13 +958,13 @@ function createInitialState(

 ### 5. AgentResponse → ルールマッチ

-**場所**: `src/core/workflow/evaluation/RuleEvaluator.ts`
+**場所**: `src/core/piece/evaluation/RuleEvaluator.ts`

 **入力**:
-- `step: WorkflowStep`
+- `step: PieceStep`
 - `content: string` (Phase 1 output)
 - `tagContent: string` (Phase 3 output)
-- `state: WorkflowState`
+- `state: PieceState`

 **処理**:
 1. タグ検出 (`[STEP:0]`, `[STEP:1]`, ...)
@@ -979,11 +979,11 @@ function createInitialState(

 ### 6. ルールマッチ → 次ステップ名

-**場所**: `src/core/workflow/engine/transitions.ts`
+**場所**: `src/core/piece/engine/transitions.ts`

 ```typescript
 function determineNextStepByRules(
-  step: WorkflowStep,
+  step: PieceStep,
   matchedRuleIndex: number
 ): string | null {
   const rule = step.rules?.[matchedRuleIndex];
@@ -1022,7 +1022,7 @@ TAKTのデータフローは、**7つのレイヤー**を通じて、ユーザ
 1. **Progressive Transformation**: データは各レイヤーで少しずつ変換され、次のレイヤーに渡される
 2. **Context Accumulation**: タスク、イテレーション、ユーザー入力などのコンテキストが蓄積される
 3. **Session Continuity**: エージェントセッションIDが保存・復元され、会話の継続性を保つ
-4. **Event-Driven Architecture**: WorkflowEngineがイベントを発行し、UI、ログ、通知が連携
+4. **Event-Driven Architecture**: PieceEngineがイベントを発行し、UI、ログ、通知が連携
 5. **3-Phase Execution**: メイン実行、レポート出力、ステータス判断の3段階で、明確な責任分離
 6. **Rule-Based Routing**: ルール評価の5段階フォールバックで、柔軟かつ予測可能な遷移
@@ -1,29 +1,29 @@
-# Workflow Guide
+# Piece Guide

-This guide explains how to create and customize TAKT workflows.
+This guide explains how to create and customize TAKT pieces.

-## Workflow Basics
+## Piece Basics

-A workflow is a YAML file that defines a sequence of steps executed by AI agents. Each step specifies:
+A piece is a YAML file that defines a sequence of steps executed by AI agents. Each step specifies:
 - Which agent to use
 - What instructions to give
 - Rules for routing to the next step

 ## File Locations

-- Builtin workflows are embedded in the npm package (`dist/resources/`)
-- `~/.takt/workflows/` — User workflows (override builtins with the same name)
-- Use `takt eject <workflow>` to copy a builtin to `~/.takt/workflows/` for customization
+- Builtin pieces are embedded in the npm package (`dist/resources/`)
+- `~/.takt/pieces/` — User pieces (override builtins with the same name)
+- Use `takt eject <piece>` to copy a builtin to `~/.takt/pieces/` for customization

-## Workflow Categories
+## Piece Categories

-ワークフローの選択 UI をカテゴリ分けしたい場合は、`workflow_categories` を設定します。
-詳細は `docs/workflow-categories.md` を参照してください。
+ピースの選択 UI をカテゴリ分けしたい場合は、`piece_categories` を設定します。
+詳細は `docs/piece-categories.md` を参照してください。

-## Workflow Schema
+## Piece Schema

 ```yaml
-name: my-workflow
+name: my-piece
 description: Optional description
 max_iterations: 10
 initial_step: first-step # Optional, defaults to first step
@@ -54,11 +54,11 @@ steps:
 | Variable | Description |
 |----------|-------------|
 | `{task}` | Original user request (auto-injected if not in template) |
-| `{iteration}` | Workflow-wide turn count (total steps executed) |
+| `{iteration}` | Piece-wide turn count (total steps executed) |
 | `{max_iterations}` | Maximum iterations allowed |
 | `{step_iteration}` | Per-step iteration count (how many times THIS step has run) |
 | `{previous_response}` | Previous step's output (auto-injected if not in template) |
-| `{user_inputs}` | Additional user inputs during workflow (auto-injected if not in template) |
+| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
 | `{report_dir}` | Report directory path (e.g., `.takt/reports/20250126-143052-task-summary`) |
 | `{report:filename}` | Resolves to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |

@@ -88,8 +88,8 @@ rules:

 ### Special `next` Values

-- `COMPLETE` — End workflow successfully
-- `ABORT` — End workflow with failure
+- `COMPLETE` — End piece successfully
+- `ABORT` — End piece with failure

 ### Rule Field: `appendix`

@@ -166,7 +166,7 @@ report:

 ## Examples

-### Simple Implementation Workflow
+### Simple Implementation Piece

 ```yaml
 name: simple-impl
@@ -250,8 +250,8 @@ steps:

 ## Best Practices

-1. **Keep iterations reasonable** — 10-30 is typical for development workflows
+1. **Keep iterations reasonable** — 10-30 is typical for development pieces
 2. **Use `edit: false` for review steps** — Prevent reviewers from modifying code
 3. **Use descriptive step names** — Makes logs easier to read
-4. **Test workflows incrementally** — Start simple, add complexity
+4. **Test pieces incrementally** — Start simple, add complexity
 5. **Use `/eject` to customize** — Copy a builtin as starting point rather than writing from scratch
@@ -10,11 +10,11 @@
 - これ、エージェントのデータを挿入してないの……?
 - 全体的に
   - 音楽にひもづける
-  - つまり、workflowsをやめて pieces にする
-  - 現workflowファイルにあるstepsもmovementsにする(全ファイルの修正)
+  - つまり、piecesをやめて pieces にする
+  - 現pieceファイルにあるstepsもmovementsにする(全ファイルの修正)
   - stepという言葉はmovementになる。phaseもmovementが適しているだろう(これは interactive における phase のことをいっていない)
 - _language パラメータは消せ
-- ワークフローを指定すると実際に送られるプロンプトを組み立てて表示する機能かツールを作れるか
+- ピースを指定すると実際に送られるプロンプトを組み立てて表示する機能かツールを作れるか
 - メタ領域を用意して説明、どこで利用されるかの説明、使えるテンプレートとその説明をかいて、その他必要な情報あれば入れて。
 - 英語と日本語が共通でもかならずファイルはわけて同じ文章を書いておく
 - 無駄な空行とか消してほしい
@@ -1,7 +1,7 @@
 {
   "name": "takt",
   "version": "0.4.1",
-  "description": "TAKT: Task Agent Koordination Tool - AI Agent Workflow Orchestration",
+  "description": "TAKT: Task Agent Koordination Tool - AI Agent Piece Orchestration",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
   "bin": {
@@ -25,7 +25,7 @@
     "ai",
     "agent",
     "orchestration",
-    "workflow",
+    "piece",
     "automation",
     "llm",
     "anthropic"
@@ -130,8 +130,8 @@ AI is confidently wrong—code that looks plausible but doesn't work, solutions

 ```typescript
 // ❌ Bad example - All callers omit
-function loadWorkflow(name: string, cwd = process.cwd()) { ... }
-// All callers: loadWorkflow('default') ← not passing cwd
+function loadPiece(name: string, cwd = process.cwd()) { ... }
+// All callers: loadPiece('default') ← not passing cwd
 // Problem: Can't tell where cwd value comes from by reading call sites
 // Fix: Make cwd required, pass explicitly at call sites

@@ -56,7 +56,7 @@ Code is read far more often than it is written. Poorly structured code destroys

 **To avoid false positives:**
 1. Before flagging "hardcoded values", **verify if the file is source or report**
-2. Files under `.takt/reports/` are generated during workflow execution - not review targets
+2. Files under `.takt/reports/` are generated during piece execution - not review targets
 3. Ignore generated files even if they appear in git diff

 ## Review Perspectives
@@ -186,7 +186,7 @@ for (const transition of step.transitions) {
 export function matchesCondition(status: Status, condition: TransitionCondition): boolean {

   // ✅ OK - Design decision (Why)
-  // User interruption takes priority over workflow-defined transitions
+  // User interruption takes priority over piece-defined transitions
   if (status === 'interrupted') {
     return ABORT_STEP;
   }
@@ -361,7 +361,7 @@ function createUser(data: UserData) {
    - Documentation schema descriptions are updated
    - Existing config files are compatible with new schema

-3. When workflow definitions are modified:
+3. When piece definitions are modified:
    - Correct fields used for step type (normal vs. parallel)
    - No unnecessary fields remaining (e.g., `next` on parallel sub-steps)

@@ -20,7 +20,7 @@ you verify "**was the right thing built (Validation)**".

 ## Human-in-the-Loop Checkpoint

-You are the **human proxy** in the automated workflow. Before approval, verify the following.
+You are the **human proxy** in the automated piece. Before approval, verify the following.

 **Ask yourself what a human reviewer would check:**
 - Does this really solve the user's problem?
@@ -92,16 +92,16 @@ Check:

 **REJECT if spec violations are found.** Don't assume "probably correct"—actually read and cross-reference the specs.

-### 7. Workflow Overall Review
+### 7. Piece Overall Review

-**Check all reports in the report directory and verify overall workflow consistency.**
+**Check all reports in the report directory and verify overall piece consistency.**

 Check:
 - Does implementation match the plan (00-plan.md)?
 - Were all review step issues properly addressed?
 - Was the original task objective achieved?

-**Workflow-wide issues:**
+**Piece-wide issues:**
 | Issue | Action |
 |-------|--------|
 | Plan-implementation gap | REJECT - Request plan revision or implementation fix |
@@ -7,8 +7,8 @@ language: en
 # Trusted directories - projects in these directories skip confirmation prompts
 trusted_directories: []

-# Default workflow to use when no workflow is specified
-default_workflow: default
+# Default piece to use when no piece is specified
+default_piece: default

 # Log level: debug, info, warn, error
 log_level: info
@@ -16,8 +16,8 @@ log_level: info
 # Provider runtime: claude or codex
 provider: claude

-# Builtin workflows (resources/global/{lang}/workflows)
-# enable_builtin_workflows: true
+# Builtin pieces (resources/global/{lang}/pieces)
+# enable_builtin_pieces: true

 # Default model (optional)
 # Claude: opus, sonnet, haiku, opusplan, default, or full model name
@@ -1,11 +1,11 @@
-workflow_categories:
+piece_categories:
   "🚀 Quick Start":
-    workflows:
+    pieces:
       - minimal
       - default

   "🔍 Review & Fix":
-    workflows:
+    pieces:
       - review-fix-minimal

   "🎨 Frontend":
@ -16,12 +16,12 @@ workflow_categories:
|
|||||||
|
|
||||||
"🔧 Expert":
|
"🔧 Expert":
|
||||||
"Full Stack":
|
"Full Stack":
|
||||||
workflows:
|
pieces:
|
||||||
- expert
|
- expert
|
||||||
- expert-cqrs
|
- expert-cqrs
|
||||||
|
|
||||||
"Others":
|
"Others":
|
||||||
workflows:
|
pieces:
|
||||||
- research
|
- research
|
||||||
- magi
|
- magi
|
||||||
- review-only
|
- review-only
|
||||||
|
|||||||
@@ -1,26 +1,26 @@
-# Default TAKT Workflow
+# Default TAKT Piece
 # Plan -> Architect -> Implement -> AI Review -> Reviewers (parallel: Architect + Security) -> Supervisor Approval
 #
-# Boilerplate sections (Workflow Context, User Request, Previous Response,
+# Boilerplate sections (Piece Context, User Request, Previous Response,
 # Additional User Inputs, Instructions heading) are auto-injected by buildInstruction().
 # Only movement-specific content belongs in instruction_template.
 #
 # Template Variables (available in instruction_template):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {previous_response} - Output from the previous movement (only when pass_previous_response: true)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 #
 # Movement-level Fields:
-# report: - Report file(s) for the movement (auto-injected as Report File/Files in Workflow Context)
+# report: - Report file(s) for the movement (auto-injected as Report File/Files in Piece Context)
 # Single: report: 00-plan.md
 # Multiple: report:
 # - Scope: 01-coder-scope.md
 # - Decisions: 02-coder-decisions.md
 
 name: default
-description: Standard development workflow with planning and specialized reviews
+description: Standard development piece with planning and specialized reviews
 
 max_iterations: 30
 
@@ -183,7 +183,7 @@ movements:
 - Plan: {report:00-plan.md}
 - Design: {report:01-architecture.md} (if exists)
 
-Use only the Report Directory files shown in Workflow Context. Do not search or open reports outside that directory.
+Use only the Report Directory files shown in Piece Context. Do not search or open reports outside that directory.
 
 **Important:** Do not make design decisions; follow the design determined in the architect movement.
 Report if you encounter unclear points or need design changes.
@@ -515,7 +515,7 @@ movements:
 instruction_template: |
 Run tests, verify the build, and perform final approval.
 
-**Workflow Overall Review:**
+**Piece Overall Review:**
 1. Does the implementation match the plan ({report:00-plan.md}) and design ({report:01-architecture.md}, if exists)?
 2. Were all review movement issues addressed?
 3. Was the original task objective achieved?
@@ -1,5 +1,5 @@
-# Expert CQRS Review Workflow
-# Review workflow with CQRS+ES, Frontend, Security, and QA experts
+# Expert CQRS Review Piece
+# Review piece with CQRS+ES, Frontend, Security, and QA experts
 #
 # Flow:
 # plan -> implement -> ai_review -> reviewers (parallel) -> supervise -> COMPLETE
@@ -10,12 +10,12 @@
 # any("needs_fix") → fix → reviewers
 #
 # Template Variables:
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request
 # {previous_response} - Output from the previous movement
-# {user_inputs} - Accumulated user inputs during workflow
+# {user_inputs} - Accumulated user inputs during piece
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: expert-cqrs
@@ -101,7 +101,7 @@ movements:
 instruction_template: |
 Follow the plan from the plan movement and implement.
 Refer to the plan report ({report:00-plan.md}) and proceed with implementation.
-Use only the Report Directory files shown in Workflow Context. Do not search or open reports outside that directory.
+Use only the Report Directory files shown in Piece Context. Do not search or open reports outside that directory.
 
 **Scope report format (create at implementation start):**
 ```markdown
@@ -550,7 +550,7 @@ movements:
 
 Run tests, verify the build, and perform final approval.
 
-**Workflow Overall Review:**
+**Piece Overall Review:**
 1. Does the implementation match the plan ({report:00-plan.md})?
 2. Were all review movement issues addressed?
 3. Was the original task objective achieved?
@@ -1,5 +1,5 @@
-# Expert Review Workflow
-# Review workflow with Architecture, Frontend, Security, and QA experts
+# Expert Review Piece
+# Review piece with Architecture, Frontend, Security, and QA experts
 #
 # Flow:
 # plan -> implement -> ai_review -> reviewers (parallel) -> supervise -> COMPLETE
@@ -12,19 +12,19 @@
 # AI review runs immediately after implementation to catch AI-specific issues early,
 # before expert reviews begin.
 #
-# Boilerplate sections (Workflow Context, User Request, Previous Response,
+# Boilerplate sections (Piece Context, User Request, Previous Response,
 # Additional User Inputs, Instructions heading) are auto-injected by buildInstruction().
 # Only movement-specific content belongs in instruction_template.
 #
 # Template Variables (available in instruction_template):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {previous_response} - Output from the previous movement (only when pass_previous_response: true)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 #
 # Movement-level Fields:
-# report: - Report file(s) for the movement (auto-injected as Report File/Files in Workflow Context)
+# report: - Report file(s) for the movement (auto-injected as Report File/Files in Piece Context)
 # Single: report: 00-plan.md
 # Multiple: report:
 # - Scope: 01-coder-scope.md
@@ -113,7 +113,7 @@ movements:
 instruction_template: |
 Follow the plan from the plan movement and implement.
 Refer to the plan report ({report:00-plan.md}) and proceed with implementation.
-Use only the Report Directory files shown in Workflow Context. Do not search or open reports outside that directory.
+Use only the Report Directory files shown in Piece Context. Do not search or open reports outside that directory.
 
 **Scope report format (create at implementation start):**
 ```markdown
@@ -563,7 +563,7 @@ movements:
 
 Run tests, verify the build, and perform final approval.
 
-**Workflow Overall Review:**
+**Piece Overall Review:**
 1. Does the implementation match the plan ({report:00-plan.md})?
 2. Were all review movement issues addressed?
 3. Was the original task objective achieved?
@@ -1,14 +1,14 @@
-# MAGI System Workflow
-# A deliberation workflow modeled after Evangelion's MAGI system
+# MAGI System Piece
+# A deliberation piece modeled after Evangelion's MAGI system
 # Three personas (scientist, nurturer, pragmatist) analyze from different perspectives and vote
 #
 # Template Variables:
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request
 # {previous_response} - Output from the previous movement
-# {user_inputs} - Accumulated user inputs during workflow
+# {user_inputs} - Accumulated user inputs during piece
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: magi
@@ -1,18 +1,18 @@
-# Minimal TAKT Workflow
+# Minimal TAKT Piece
 # Implement -> Parallel Review (AI + Supervisor) -> Fix if needed -> Complete
 # (Simplest configuration - no plan, no architect review)
 #
 # Template Variables (auto-injected):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request (auto-injected)
 # {previous_response} - Output from the previous movement (auto-injected)
-# {user_inputs} - Accumulated user inputs during workflow (auto-injected)
+# {user_inputs} - Accumulated user inputs during piece (auto-injected)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: minimal
-description: Minimal development workflow (implement -> parallel review -> fix if needed -> complete)
+description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
 
 max_iterations: 20
 
@@ -37,7 +37,7 @@ movements:
 permission_mode: edit
 instruction_template: |
 Implement the task.
-Use only the Report Directory files shown in Workflow Context. Do not search or open reports outside that directory.
+Use only the Report Directory files shown in Piece Context. Do not search or open reports outside that directory.
 
 **Scope report format (create at implementation start):**
 ```markdown
@@ -154,7 +154,7 @@ movements:
 instruction_template: |
 Run tests, verify the build, and perform final approval.
 
-**Workflow Overall Review:**
+**Piece Overall Review:**
 1. Does the implementation meet the original request?
 2. Were AI Review issues addressed?
 3. Was the original task objective achieved?
@@ -1,5 +1,5 @@
-# Research Workflow
-# A workflow that autonomously executes research tasks
+# Research Piece
+# A piece that autonomously executes research tasks
 # Planner creates the plan, Digger executes, Supervisor verifies
 #
 # Flow:
@@ -7,16 +7,16 @@
 # -> plan (rejected: restart from planning)
 #
 # Template Variables:
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request
 # {previous_response} - Output from the previous movement
-# {user_inputs} - Accumulated user inputs during workflow
+# {user_inputs} - Accumulated user inputs during piece
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: research
-description: Research workflow - autonomously executes research without asking questions
+description: Research piece - autonomously executes research without asking questions
 
 max_iterations: 10
 
@@ -30,8 +30,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## Workflow Status
-- Iteration: {iteration}/{max_iterations} (workflow-wide)
+## Piece Status
+- Iteration: {iteration}/{max_iterations} (piece-wide)
 - Movement Iteration: {movement_iteration} (times this movement has run)
 - Movement: plan
 
@@ -67,8 +67,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## Workflow Status
-- Iteration: {iteration}/{max_iterations} (workflow-wide)
+## Piece Status
+- Iteration: {iteration}/{max_iterations} (piece-wide)
 - Movement Iteration: {movement_iteration} (times this movement has run)
 - Movement: dig
 
@@ -109,8 +109,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## Workflow Status
-- Iteration: {iteration}/{max_iterations} (workflow-wide)
+## Piece Status
+- Iteration: {iteration}/{max_iterations} (piece-wide)
 - Movement Iteration: {movement_iteration} (times this movement has run)
 - Movement: supervise (research quality evaluation)
 
@@ -1,18 +1,18 @@
-# Review-Fix Minimal TAKT Workflow
+# Review-Fix Minimal TAKT Piece
 # Review -> Fix (if needed) -> Re-review -> Complete
 # (Starts with review, no implementation movement)
 #
 # Template Variables (auto-injected):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request (auto-injected)
 # {previous_response} - Output from the previous movement (auto-injected)
-# {user_inputs} - Accumulated user inputs during workflow (auto-injected)
+# {user_inputs} - Accumulated user inputs during piece (auto-injected)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: review-fix-minimal
-description: Review and fix workflow for existing code (starts with review, no implementation)
+description: Review and fix piece for existing code (starts with review, no implementation)
 
 max_iterations: 20
 
@@ -37,7 +37,7 @@ movements:
 permission_mode: edit
 instruction_template: |
 Implement the task.
-Use only the Report Directory files shown in Workflow Context. Do not search or open reports outside that directory.
+Use only the Report Directory files shown in Piece Context. Do not search or open reports outside that directory.
 
 **Scope report format (create at implementation start):**
 ```markdown
@@ -154,7 +154,7 @@ movements:
 instruction_template: |
 Run tests, verify the build, and perform final approval.
 
-**Workflow Overall Review:**
+**Piece Overall Review:**
 1. Does the implementation meet the original request?
 2. Were AI Review issues addressed?
 3. Was the original task objective achieved?
@@ -1,4 +1,4 @@
-# Review-Only Workflow
+# Review-Only Piece
 # Reviews code or PRs without making any edits
 # Local: console output only. PR specified: posts inline comments + summary to PR
 #
@@ -11,7 +11,7 @@
 # All movements have edit: false (no file modifications)
 #
 # Template Variables:
-# {iteration} - Workflow-wide turn count
+# {iteration} - Piece-wide turn count
 # {max_iterations} - Maximum iterations allowed
 # {movement_iteration} - Per-movement iteration count
 # {task} - Original user request
@@ -20,7 +20,7 @@
 # {report_dir} - Report directory name
 
 name: review-only
-description: Review-only workflow - reviews code without making edits
+description: Review-only piece - reviews code without making edits
 
 max_iterations: 10
 
@@ -54,7 +54,7 @@ movements:
 
 Analyze the review request and create a review plan.
 
-**This is a review-only workflow.** No code edits will be made.
+**This is a review-only piece.** No code edits will be made.
 Focus on:
 1. Identify which files/modules to review
 2. Determine review focus areas (architecture, security, AI patterns, etc.)
@@ -239,7 +239,7 @@ movements:
 ## Review Results
 {previous_response}
 
-**This is a review-only workflow.** Do NOT run tests or builds.
+**This is a review-only piece.** Do NOT run tests or builds.
 Your role is to synthesize the review results and produce a final summary.
 
 **Tasks:**
@@ -326,5 +326,5 @@ movements:
 {Consolidated suggestions}
 
 ---
-*Generated by [takt](https://github.com/toruticas/takt) review-only workflow*
+*Generated by [takt](https://github.com/toruticas/takt) review-only piece*
 ```
@@ -1,9 +1,9 @@
-You are responsible for instruction creation in TAKT's interactive mode. Convert the conversation into a concrete task instruction for workflow execution.
+You are responsible for instruction creation in TAKT's interactive mode. Convert the conversation into a concrete task instruction for piece execution.
 
 ## Your position
 - You: Interactive mode (task organization and instruction creation)
-- Next step: Your instruction will be passed to the workflow, where multiple AI agents execute sequentially
-- Your output (instruction) becomes the input (task) for the entire workflow
+- Next step: Your instruction will be passed to the piece, where multiple AI agents execute sequentially
+- Your output (instruction) becomes the input (task) for the entire piece
 
 ## Requirements
 - Output only the final task instruction (no preamble).
@@ -13,4 +13,4 @@ You are responsible for instruction creation in TAKT's interactive mode. Convert
 - Do not include constraints proposed or inferred by the assistant.
 - Do NOT include assistant/system operational constraints (tool limits, execution prohibitions).
 - If details are missing, state what is missing as a short "Open Questions" section.
-- Clearly specify the concrete work that the workflow will execute.
+- Clearly specify the concrete work that the piece will execute.
@ -1,23 +1,23 @@
|
|||||||
You are the interactive mode of TAKT (AI Agent Workflow Orchestration Tool).
|
You are the interactive mode of TAKT (AI Agent Piece Orchestration Tool).
|
||||||
|
|
||||||
## How TAKT works
|
## How TAKT works
|
||||||
1. **Interactive mode (your role)**: Talk with the user to clarify and organize the task, creating a concrete instruction document for workflow execution
|
1. **Interactive mode (your role)**: Talk with the user to clarify and organize the task, creating a concrete instruction document for piece execution
|
||||||
2. **Workflow execution**: Pass your instruction document to the workflow, where multiple AI agents execute sequentially (implementation, review, fixes, etc.)
|
2. **Piece execution**: Pass your instruction document to the piece, where multiple AI agents execute sequentially (implementation, review, fixes, etc.)
|
||||||
|
|
||||||
## Your role
|
## Your role
|
||||||
- Ask clarifying questions about ambiguous requirements
|
- Ask clarifying questions about ambiguous requirements
|
||||||
- Clarify and refine the user's request into a clear task instruction
|
- Clarify and refine the user's request into a clear task instruction
|
||||||
- Create concrete instructions for workflow agents to follow
|
- Create concrete instructions for piece agents to follow
|
||||||
- Summarize your understanding when appropriate
|
- Summarize your understanding when appropriate
|
||||||
- Keep responses concise and focused
|
- Keep responses concise and focused
|
||||||
|
 ## Critical: Understanding user intent
 
-**The user is asking YOU to create a task instruction for the WORKFLOW, not asking you to execute the task.**
+**The user is asking YOU to create a task instruction for the PIECE, not asking you to execute the task.**
 
 When the user says:
 
-- "Review this code" → They want the WORKFLOW to review (you create the instruction)
+- "Review this code" → They want the PIECE to review (you create the instruction)
-- "Implement feature X" → They want the WORKFLOW to implement (you create the instruction)
+- "Implement feature X" → They want the PIECE to implement (you create the instruction)
-- "Fix this bug" → They want the WORKFLOW to fix (you create the instruction)
+- "Fix this bug" → They want the PIECE to fix (you create the instruction)
 
 These are NOT requests for YOU to investigate. Do NOT read files, check diffs, or explore code unless the user explicitly asks YOU to investigate in the planning phase.
 
@@ -28,13 +28,13 @@ Only investigate when the user explicitly asks YOU (the planning assistant) to c
 - "What does this project do?" ✓
 
 ## When investigation is NOT appropriate (most cases)
-Do NOT investigate when the user is describing a task for the workflow:
+Do NOT investigate when the user is describing a task for the piece:
-- "Review the changes" ✗ (workflow's job)
+- "Review the changes" ✗ (piece's job)
-- "Fix the code" ✗ (workflow's job)
+- "Fix the code" ✗ (piece's job)
-- "Implement X" ✗ (workflow's job)
+- "Implement X" ✗ (piece's job)
 
 ## Strict constraints
-- You are ONLY refining requirements. The actual work (implementation/investigation/review) is done by workflow agents.
+- You are ONLY refining requirements. The actual work (implementation/investigation/review) is done by piece agents.
 - Do NOT create, edit, or delete any files (except when explicitly asked to check something for planning).
 - Do NOT run build, test, install, or any commands that modify state.
 - Do NOT use Read/Glob/Grep/Bash proactively. Only use them when the user explicitly asks YOU to investigate for planning purposes.
@@ -153,8 +153,8 @@ AIは自信を持って間違える——もっともらしく見えるが動か
 
 ```typescript
 // ❌ 悪い例 - 全呼び出し元が省略している
-function loadWorkflow(name: string, cwd = process.cwd()) { ... }
+function loadPiece(name: string, cwd = process.cwd()) { ... }
-// 全呼び出し元: loadWorkflow('default') ← cwd を渡していない
+// 全呼び出し元: loadPiece('default') ← cwd を渡していない
 // 問題: cwd の値がどこから来るか、呼び出し元を見ても分からない
 // 修正: cwd を必須引数にし、呼び出し元で明示的に渡す
 
@@ -56,7 +56,7 @@
 
 **誤検知を避けるために:**
 1. 「ハードコードされた値」を指摘する前に、**そのファイルがソースかレポートか確認**
-2. `.takt/reports/` 以下のファイルはワークフロー実行時に生成されるため、レビュー対象外
+2. `.takt/reports/` 以下のファイルはピース実行時に生成されるため、レビュー対象外
 3. git diff に含まれていても、生成ファイルは無視する
 
 ## レビュー観点
@@ -186,7 +186,7 @@ for (const transition of step.transitions) {
 export function matchesCondition(status: Status, condition: TransitionCondition): boolean {
 
 // ✅ OK - 設計判断の理由(Why)
-// ユーザー中断はワークフロー定義のトランジションより優先する
+// ユーザー中断はピース定義のトランジションより優先する
 if (status === 'interrupted') {
 return ABORT_STEP;
 }
@@ -481,7 +481,7 @@ function createOrder(data: OrderData) {
 - ドキュメントのスキーマ説明が更新されているか
 - 既存の設定ファイルが新しいスキーマと整合するか
 
-3. ワークフロー定義を変更した場合:
+3. ピース定義を変更した場合:
 - ムーブメント種別(通常/parallel)に応じた正しいフィールドが使われているか
 - 不要なフィールド(parallelサブムーブメントのnext等)が残っていないか
 
@@ -515,13 +515,13 @@ function createOrder(data: OrderData) {
 
 ```typescript
 // ❌ 配線漏れ: projectCwd を受け取る口がない
-export async function executeWorkflow(config, cwd, task) {
+export async function executePiece(config, cwd, task) {
-const engine = new WorkflowEngine(config, cwd, task); // options なし
+const engine = new PieceEngine(config, cwd, task); // options なし
 }
 
 // ✅ 配線済み: projectCwd を渡せる
-export async function executeWorkflow(config, cwd, task, options?) {
+export async function executePiece(config, cwd, task, options?) {
-const engine = new WorkflowEngine(config, cwd, task, options);
+const engine = new PieceEngine(config, cwd, task, options);
 }
 ```
 
@@ -20,7 +20,7 @@ Architectが「正しく作られているか(Verification)」を確認す
 
 ## Human-in-the-Loop チェックポイント
 
-あなたは自動化されたワークフローにおける**人間の代理**です。承認前に以下を確認してください。
+あなたは自動化されたピースにおける**人間の代理**です。承認前に以下を確認してください。
 
 **人間のレビュアーなら何をチェックするか自問する:**
 - これは本当にユーザーの問題を解決しているか?
@@ -92,16 +92,16 @@ Architectが「正しく作られているか(Verification)」を確認す
 
 **仕様違反を見つけたら REJECT。** 仕様は「たぶん合ってる」ではなく、実際に読んで突合する。
 
-### 7. ワークフロー全体の見直し
+### 7. ピース全体の見直し
 
-**レポートディレクトリ内の全レポートを確認し、ワークフロー全体の整合性をチェックする。**
+**レポートディレクトリ内の全レポートを確認し、ピース全体の整合性をチェックする。**
 
 確認すること:
 - 計画(00-plan.md)と実装結果が一致しているか
 - 各レビュームーブメントの指摘が適切に対応されているか
 - タスクの本来の目的が達成されているか
 
-**ワークフロー全体の問題:**
+**ピース全体の問題:**
 | 問題 | 対応 |
 |------|------|
 | 計画と実装の乖離 | REJECT - 計画の見直しまたは実装の修正を指示 |
@@ -7,8 +7,8 @@ language: ja
 # 信頼済みディレクトリ - これらのディレクトリ内のプロジェクトは確認プロンプトをスキップします
 trusted_directories: []
 
-# デフォルトワークフロー - ワークフローが指定されていない場合に使用します
+# デフォルトピース - ピースが指定されていない場合に使用します
-default_workflow: default
+default_piece: default
 
 # ログレベル: debug, info, warn, error
 log_level: info
@@ -16,8 +16,8 @@ log_level: info
 # プロバイダー: claude または codex
 provider: claude
 
-# ビルトインワークフローの読み込み (resources/global/{lang}/workflows)
+# ビルトインピースの読み込み (resources/global/{lang}/pieces)
-# enable_builtin_workflows: true
+# enable_builtin_pieces: true
 
 # デフォルトモデル (オプション)
 # Claude: opus, sonnet, haiku, opusplan, default, またはフルモデル名
@@ -1,11 +1,11 @@
-workflow_categories:
+piece_categories:
 "🚀 クイックスタート":
-workflows:
+pieces:
 - default
 - minimal
 
 "🔍 レビュー&修正":
-workflows:
+pieces:
 - review-fix-minimal
 
 "🎨 フロントエンド":
@@ -15,12 +15,12 @@ workflow_categories:
 {}
 
 "🔧 フルスタック":
-workflows:
+pieces:
 - expert
 - expert-cqrs
 
 "その他":
-workflows:
+pieces:
 - research
 - magi
 - review-only
@@ -1,17 +1,17 @@
-# Default TAKT Workflow
+# Default TAKT Piece
 # Plan -> Architect -> Implement -> AI Review -> Reviewers (parallel: Architect + Security) -> Supervisor Approval
 #
 # Template Variables (auto-injected by buildInstruction):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request
 # {previous_response} - Output from the previous movement
-# {user_inputs} - Accumulated user inputs during workflow
+# {user_inputs} - Accumulated user inputs during piece
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: default
-description: Standard development workflow with planning and specialized reviews
+description: Standard development piece with planning and specialized reviews
 
 max_iterations: 30
 
@@ -174,7 +174,7 @@ movements:
 - 計画: {report:00-plan.md}
 - 設計: {report:01-architecture.md}(存在する場合)
 
-Workflow Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
 
 **重要:** 設計判断はせず、architectムーブメントで決定された設計に従ってください。
 不明点や設計の変更が必要な場合は報告してください。
@@ -512,7 +512,7 @@ movements:
 instruction_template: |
 テスト実行、ビルド確認、最終承認を行ってください。
 
-**ワークフロー全体の確認:**
+**ピース全体の確認:**
 1. 計画({report:00-plan.md})と設計({report:01-architecture.md}、存在する場合)に従った実装か
 2. 各レビュームーブメントの指摘が対応されているか
 3. 元のタスク目的が達成されているか
@@ -1,5 +1,5 @@
-# Expert Review Workflow
+# Expert Review Piece
-# CQRS+ES、フロントエンド、セキュリティ、QAの専門家によるレビューワークフロー
+# CQRS+ES、フロントエンド、セキュリティ、QAの専門家によるレビューピース
 #
 # フロー:
 # plan -> implement -> ai_review -> reviewers (parallel) -> supervise -> COMPLETE
@@ -9,19 +9,19 @@
 # └─ qa-review
 # any("needs_fix") → fix → reviewers
 #
-# ボイラープレートセクション(Workflow Context, User Request, Previous Response,
+# ボイラープレートセクション(Piece Context, User Request, Previous Response,
 # Additional User Inputs, Instructions heading)はbuildInstruction()が自動挿入。
 # instruction_templateにはムーブメント固有の内容のみ記述。
 #
 # テンプレート変数(instruction_template内で使用可能):
-# {iteration} - ワークフロー全体のターン数(全エージェントで実行されたムーブメントの合計)
+# {iteration} - ピース全体のターン数(全エージェントで実行されたムーブメントの合計)
-# {max_iterations} - ワークフローの最大イテレーション数
+# {max_iterations} - ピースの最大イテレーション数
 # {movement_iteration} - ムーブメントごとのイテレーション数(このムーブメントが何回実行されたか)
 # {previous_response} - 前のムーブメントの出力(pass_previous_response: true の場合のみ)
 # {report_dir} - レポートディレクトリ名(例: "20250126-143052-task-summary")
 #
 # ムーブメントレベルフィールド:
-# report: - ムーブメントのレポートファイル(Workflow ContextにReport File/Filesとして自動挿入)
+# report: - ムーブメントのレポートファイル(Piece ContextにReport File/Filesとして自動挿入)
 # 単一: report: 00-plan.md
 # 複数: report:
 # - Scope: 01-coder-scope.md
@@ -110,7 +110,7 @@ movements:
 instruction_template: |
 planムーブメントで立てた計画に従って実装してください。
 計画レポート({report:00-plan.md})を参照し、実装を進めてください。
-Workflow Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
 
 **Scopeレポートフォーマット(実装開始時に作成):**
 ```markdown
@@ -558,7 +558,7 @@ movements:
 
 テスト実行、ビルド確認、最終承認を行ってください。
 
-**ワークフロー全体の確認:**
+**ピース全体の確認:**
 1. 計画({report:00-plan.md})と実装結果が一致しているか
 2. 各レビュームーブメントの指摘が対応されているか
 3. 元のタスク目的が達成されているか
@@ -1,5 +1,5 @@
-# Expert Review Workflow
+# Expert Review Piece
-# アーキテクチャ、フロントエンド、セキュリティ、QAの専門家によるレビューワークフロー
+# アーキテクチャ、フロントエンド、セキュリティ、QAの専門家によるレビューピース
 #
 # フロー:
 # plan -> implement -> ai_review -> reviewers (parallel) -> supervise -> COMPLETE
@@ -10,12 +10,12 @@
 # any("needs_fix") → fix → reviewers
 #
 # テンプレート変数:
-# {iteration} - ワークフロー全体のターン数(全エージェントで実行されたムーブメントの合計)
+# {iteration} - ピース全体のターン数(全エージェントで実行されたムーブメントの合計)
-# {max_iterations} - ワークフローの最大イテレーション数
+# {max_iterations} - ピースの最大イテレーション数
 # {movement_iteration} - ムーブメントごとのイテレーション数(このムーブメントが何回実行されたか)
 # {task} - 元のユーザー要求
 # {previous_response} - 前のムーブメントの出力
-# {user_inputs} - ワークフロー中に蓄積されたユーザー入力
+# {user_inputs} - ピース中に蓄積されたユーザー入力
 # {report_dir} - レポートディレクトリ名(例: "20250126-143052-task-summary")
 
 name: expert
@@ -101,7 +101,7 @@ movements:
 instruction_template: |
 planムーブメントで立てた計画に従って実装してください。
 計画レポート({report:00-plan.md})を参照し、実装を進めてください。
-Workflow Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
 
 **Scopeレポートフォーマット(実装開始時に作成):**
 ```markdown
@@ -549,7 +549,7 @@ movements:
 
 テスト実行、ビルド確認、最終承認を行ってください。
 
-**ワークフロー全体の確認:**
+**ピース全体の確認:**
 1. 計画({report:00-plan.md})と実装結果が一致しているか
 2. 各レビュームーブメントの指摘が対応されているか
 3. 元のタスク目的が達成されているか
@@ -1,14 +1,14 @@
-# MAGI System Workflow
+# MAGI System Piece
-# エヴァンゲリオンのMAGIシステムを模した合議制ワークフロー
+# エヴァンゲリオンのMAGIシステムを模した合議制ピース
 # 3つの人格(科学者・育成者・実務家)が異なる観点から分析・投票する
 #
 # テンプレート変数:
-# {iteration} - ワークフロー全体のターン数(全エージェントで実行されたムーブメントの合計)
+# {iteration} - ピース全体のターン数(全エージェントで実行されたムーブメントの合計)
-# {max_iterations} - ワークフローの最大イテレーション数
+# {max_iterations} - ピースの最大イテレーション数
 # {movement_iteration} - ムーブメントごとのイテレーション数(このムーブメントが何回実行されたか)
 # {task} - 元のユーザー要求
 # {previous_response} - 前のムーブメントの出力
-# {user_inputs} - ワークフロー中に蓄積されたユーザー入力
+# {user_inputs} - ピース中に蓄積されたユーザー入力
 # {report_dir} - レポートディレクトリ名(例: "20250126-143052-task-summary")
 
 name: magi
@@ -1,18 +1,18 @@
-# Simple TAKT Workflow
+# Simple TAKT Piece
 # Implement -> AI Review -> Supervisor Approval
 # (最もシンプルな構成 - plan, architect review, fix ムーブメントなし)
 #
 # Template Variables (auto-injected):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request (auto-injected)
 # {previous_response} - Output from the previous movement (auto-injected)
-# {user_inputs} - Accumulated user inputs during workflow (auto-injected)
+# {user_inputs} - Accumulated user inputs during piece (auto-injected)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: minimal
-description: Minimal development workflow (implement -> parallel review -> fix if needed -> complete)
+description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
 
 max_iterations: 20
 
@@ -37,7 +37,7 @@ movements:
 permission_mode: edit
 instruction_template: |
 タスクを実装してください。
-Workflow Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
 
 **Scopeレポートフォーマット(実装開始時に作成):**
 ```markdown
@@ -154,7 +154,7 @@ movements:
 instruction_template: |
 テスト実行、ビルド確認、最終承認を行ってください。
 
-**ワークフロー全体の確認:**
+**ピース全体の確認:**
 1. 実装結果が元の要求を満たしているか
 2. AI Reviewの指摘が対応されているか
 3. 元のタスク目的が達成されているか
@@ -1,5 +1,5 @@
-# Research Workflow
+# Research Piece
-# 調査タスクを自律的に実行するワークフロー
+# 調査タスクを自律的に実行するピース
 # Planner が計画を立て、Digger が実行し、Supervisor が確認する
 #
 # フロー:
@@ -7,16 +7,16 @@
 # -> plan (rejected: 計画からやり直し)
 #
 # テンプレート変数:
-# {iteration} - ワークフロー全体のターン数(全エージェントで実行されたムーブメントの合計)
+# {iteration} - ピース全体のターン数(全エージェントで実行されたムーブメントの合計)
-# {max_iterations} - ワークフローの最大イテレーション数
+# {max_iterations} - ピースの最大イテレーション数
 # {movement_iteration} - ムーブメントごとのイテレーション数(このムーブメントが何回実行されたか)
 # {task} - 元のユーザー要求
 # {previous_response} - 前のムーブメントの出力
-# {user_inputs} - ワークフロー中に蓄積されたユーザー入力
+# {user_inputs} - ピース中に蓄積されたユーザー入力
 # {report_dir} - レポートディレクトリ名(例: "20250126-143052-task-summary")
 
 name: research
-description: 調査ワークフロー - 質問せずに自律的に調査を実行
+description: 調査ピース - 質問せずに自律的に調査を実行
 
 max_iterations: 10
 
@@ -30,8 +30,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## ワークフロー状況
+## ピース状況
-- イテレーション: {iteration}/{max_iterations}(ワークフロー全体)
+- イテレーション: {iteration}/{max_iterations}(ピース全体)
 - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
 - ムーブメント: plan
 
@@ -67,8 +67,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## ワークフロー状況
+## ピース状況
-- イテレーション: {iteration}/{max_iterations}(ワークフロー全体)
+- イテレーション: {iteration}/{max_iterations}(ピース全体)
 - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
 - ムーブメント: dig
 
@@ -109,8 +109,8 @@ movements:
 - WebSearch
 - WebFetch
 instruction_template: |
-## ワークフロー状況
+## ピース状況
-- イテレーション: {iteration}/{max_iterations}(ワークフロー全体)
+- イテレーション: {iteration}/{max_iterations}(ピース全体)
 - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
 - ムーブメント: supervise (調査品質評価)
 
@@ -1,18 +1,18 @@
-# Review-Fix Minimal TAKT Workflow
+# Review-Fix Minimal TAKT Piece
 # Review -> Fix (if needed) -> Re-review -> Complete
 # (レビューから開始、実装ムーブメントなし)
 #
 # Template Variables (auto-injected):
-# {iteration} - Workflow-wide turn count (total movements executed across all agents)
+# {iteration} - Piece-wide turn count (total movements executed across all agents)
-# {max_iterations} - Maximum iterations allowed for the workflow
+# {max_iterations} - Maximum iterations allowed for the piece
 # {movement_iteration} - Per-movement iteration count (how many times THIS movement has been executed)
 # {task} - Original user request (auto-injected)
 # {previous_response} - Output from the previous movement (auto-injected)
-# {user_inputs} - Accumulated user inputs during workflow (auto-injected)
+# {user_inputs} - Accumulated user inputs during piece (auto-injected)
 # {report_dir} - Report directory name (e.g., "20250126-143052-task-summary")
 
 name: review-fix-minimal
-description: 既存コードのレビューと修正ワークフロー(レビュー開始、実装なし)
+description: 既存コードのレビューと修正ピース(レビュー開始、実装なし)
 
 max_iterations: 20
 
@@ -37,7 +37,7 @@ movements:
 permission_mode: edit
 instruction_template: |
 タスクを実装してください。
-Workflow Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
 
 **Scopeレポートフォーマット(実装開始時に作成):**
 ```markdown
@@ -154,7 +154,7 @@ movements:
 instruction_template: |
 テスト実行、ビルド確認、最終承認を行ってください。
 
-**ワークフロー全体の確認:**
+**ピース全体の確認:**
 1. 実装結果が元の要求を満たしているか
 2. AI Reviewの指摘が対応されているか
 3. 元のタスク目的が達成されているか
@@ -1,4 +1,4 @@
-# レビュー専用ワークフロー
+# レビュー専用ピース
 # コードやPRをレビューするだけで編集は行わない
 # ローカル: コンソール出力のみ。PR指定時: PRにインラインコメント+サマリを投稿
 #
@@ -11,7 +11,7 @@
 # 全ムーブメント edit: false(ファイル変更なし)
 #
 # テンプレート変数:
-# {iteration} - ワークフロー全体のターン数
+# {iteration} - ピース全体のターン数
 # {max_iterations} - 最大イテレーション数
 # {movement_iteration} - ムーブメントごとのイテレーション数
 # {task} - 元のユーザー要求
@@ -20,7 +20,7 @@
 # {report_dir} - レポートディレクトリ名
 
 name: review-only
-description: レビュー専用ワークフロー - コードをレビューするだけで編集は行わない
+description: レビュー専用ピース - コードをレビューするだけで編集は行わない
 
 max_iterations: 10
 
@@ -54,7 +54,7 @@ movements:
 
 レビュー依頼を分析し、レビュー方針を立ててください。
 
-**これはレビュー専用ワークフローです。** コード編集は行いません。
+**これはレビュー専用ピースです。** コード編集は行いません。
 以下に集中してください:
 1. レビュー対象のファイル/モジュールを特定
 2. レビューの重点領域を決定(アーキテクチャ、セキュリティ、AIパターン等)
@@ -239,7 +239,7 @@ movements:
 ## レビュー結果
 {previous_response}
 
-**これはレビュー専用ワークフローです。** テスト実行やビルドは行わないでください。
+**これはレビュー専用ピースです。** テスト実行やビルドは行わないでください。
 レビュー結果を統合し、最終サマリーを作成する役割です。
 
 **やること:**
@@ -327,5 +327,5 @@ movements:
 {統合された提案}
 
 ---
-*[takt](https://github.com/toruticas/takt) review-only ワークフローで生成*
+*[takt](https://github.com/toruticas/takt) review-only ピースで生成*
 ```
@@ -1,9 +1,9 @@
-あなたはTAKTの対話モードでの指示書作成を担当しています。これまでの会話内容を、ワークフロー実行用の具体的なタスク指示書に変換してください。
+あなたはTAKTの対話モードでの指示書作成を担当しています。これまでの会話内容を、ピース実行用の具体的なタスク指示書に変換してください。
 
 ## 立ち位置
 - あなた: 対話モード(タスク整理・指示書作成)
-- 次のステップ: あなたが作成した指示書がワークフローに渡され、複数のAIエージェントが順次実行する
+- 次のステップ: あなたが作成した指示書がピースに渡され、複数のAIエージェントが順次実行する
-- あなたの成果物(指示書)が、ワークフロー全体の入力(タスク)になる
+- あなたの成果物(指示書)が、ピース全体の入力(タスク)になる
 
 ## 要件
 - 出力はタスク指示書のみ(前置き不要)
@@ -13,4 +13,4 @@
 - アシスタントが提案・推測した制約は指示書に含めない
 - アシスタントの運用上の制約(実行禁止/ツール制限など)は指示に含めない
 - 情報不足があれば「Open Questions」セクションを短く付ける
-- ワークフローが実行する具体的な作業内容を明記する
+- ピースが実行する具体的な作業内容を明記する
@@ -1,23 +1,23 @@
-あなたはTAKT(AIエージェントワークフローオーケストレーションツール)の対話モードを担当しています。
+あなたはTAKT(AIエージェントピースオーケストレーションツール)の対話モードを担当しています。
 
 ## TAKTの仕組み
-1. **対話モード(今ここ・あなたの役割)**: ユーザーと会話してタスクを整理し、ワークフロー実行用の具体的な指示書を作成する
+1. **対話モード(今ここ・あなたの役割)**: ユーザーと会話してタスクを整理し、ピース実行用の具体的な指示書を作成する
-2. **ワークフロー実行**: あなたが作成した指示書をワークフローに渡し、複数のAIエージェントが順次実行する(実装、レビュー、修正など)
+2. **ピース実行**: あなたが作成した指示書をピースに渡し、複数のAIエージェントが順次実行する(実装、レビュー、修正など)
 
 ## 役割
 - あいまいな要求に対して確認質問をする
 - ユーザーの要求を明確化し、指示書として洗練させる
-- ワークフローのエージェントが迷わないよう具体的な指示書を作成する
+- ピースのエージェントが迷わないよう具体的な指示書を作成する
 - 必要に応じて理解した内容を簡潔にまとめる
 - 返答は簡潔で要点のみ
 
 ## 重要:ユーザーの意図を理解する
-**ユーザーは「あなた」に作業を依頼しているのではなく、「ワークフロー」への指示書作成を依頼しています。**
+**ユーザーは「あなた」に作業を依頼しているのではなく、「ピース」への指示書作成を依頼しています。**
 
 ユーザーが次のように言った場合:
-- 「このコードをレビューして」→ ワークフローにレビューさせる(あなたは指示書を作成)
+- 「このコードをレビューして」→ ピースにレビューさせる(あなたは指示書を作成)
-- 「機能Xを実装して」→ ワークフローに実装させる(あなたは指示書を作成)
+- 「機能Xを実装して」→ ピースに実装させる(あなたは指示書を作成)
-- 「このバグを修正して」→ ワークフローに修正させる(あなたは指示書を作成)
+- 「このバグを修正して」→ ピースに修正させる(あなたは指示書を作成)
 
 これらは「あなた」への調査依頼ではありません。ファイルを読んだり、差分を確認したり、コードを探索したりしないでください。
 
@@ -28,13 +28,13 @@
 - 「このプロジェクトは何をするもの?」✓
 
 ## 調査が不適切な場合(ほとんどのケース)
-ユーザーがワークフロー向けのタスクを説明している場合は調査しない:
+ユーザーがピース向けのタスクを説明している場合は調査しない:
-- 「変更をレビューして」✗(ワークフローの仕事)
+- 「変更をレビューして」✗(ピースの仕事)
-- 「コードを修正して」✗(ワークフローの仕事)
+- 「コードを修正して」✗(ピースの仕事)
-- 「Xを実装して」✗(ワークフローの仕事)
+- 「Xを実装して」✗(ピースの仕事)
 
 ## 厳守事項
-- あなたは要求の明確化のみを行う。実際の作業(実装/調査/レビュー等)はワークフローのエージェントが行う
+- あなたは要求の明確化のみを行う。実際の作業(実装/調査/レビュー等)はピースのエージェントが行う
 - ファイルの作成/編集/削除はしない(計画目的で明示的に依頼された場合を除く)
 - build/test/install など状態を変えるコマンドは実行しない
 - Read/Glob/Grep/Bash を勝手に使わない。ユーザーが明示的に「あなた」に調査を依頼した場合のみ使用
@@ -9,13 +9,13 @@ Tasks placed in this directory (.takt/tasks/) will be processed by TAKT.
 task: "Task description"
 worktree: true # (optional) true | "/path/to/dir"
 branch: "feat/my-feature" # (optional) branch name
-workflow: "default" # (optional) workflow name
+piece: "default" # (optional) piece name
 
 Fields:
 task (required) Task description (string)
 worktree (optional) true: create shared clone, "/path": clone at path
 branch (optional) Branch name (auto-generated if omitted: takt/{timestamp}-{slug})
-workflow (optional) Workflow name (uses current workflow if omitted)
+piece (optional) Piece name (uses current piece if omitted)
 
 ## Markdown Format (Simple)
 
|
|||||||
@ -18,7 +18,7 @@ vi.mock('../infra/providers/index.js', () => ({
|
|||||||
|
|
||||||
vi.mock('../infra/config/global/globalConfig.js', () => ({
|
vi.mock('../infra/config/global/globalConfig.js', () => ({
|
||||||
loadGlobalConfig: vi.fn(() => ({ provider: 'claude' })),
|
loadGlobalConfig: vi.fn(() => ({ provider: 'claude' })),
|
||||||
getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
|
getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
|
||||||
}));
|
}));
|
||||||
|
|
||||||
vi.mock('../shared/prompt/index.js', () => ({
|
vi.mock('../shared/prompt/index.js', () => ({
|
||||||
@@ -46,11 +46,11 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 }));
 
 vi.mock('../features/tasks/execute/selectAndExecute.js', () => ({
-  determineWorkflow: vi.fn(),
+  determinePiece: vi.fn(),
 }));
 
-vi.mock('../infra/config/loaders/workflowResolver.js', () => ({
-  getWorkflowDescription: vi.fn(() => ({ name: 'default', description: '' })),
+vi.mock('../infra/config/loaders/pieceResolver.js', () => ({
+  getPieceDescription: vi.fn(() => ({ name: 'default', description: '' })),
 }));
 
 vi.mock('../infra/github/issue.js', () => ({
@@ -71,8 +71,8 @@ vi.mock('../infra/github/issue.js', () => ({
 import { interactiveMode } from '../features/interactive/index.js';
 import { promptInput, confirm } from '../shared/prompt/index.js';
 import { summarizeTaskName } from '../infra/task/summarize.js';
-import { determineWorkflow } from '../features/tasks/execute/selectAndExecute.js';
-import { getWorkflowDescription } from '../infra/config/loaders/workflowResolver.js';
+import { determinePiece } from '../features/tasks/execute/selectAndExecute.js';
+import { getPieceDescription } from '../infra/config/loaders/pieceResolver.js';
 import { resolveIssueTask } from '../infra/github/issue.js';
 import { addTask } from '../features/tasks/index.js';
 
@@ -81,8 +81,8 @@ const mockInteractiveMode = vi.mocked(interactiveMode);
 const mockPromptInput = vi.mocked(promptInput);
 const mockConfirm = vi.mocked(confirm);
 const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
-const mockDetermineWorkflow = vi.mocked(determineWorkflow);
-const mockGetWorkflowDescription = vi.mocked(getWorkflowDescription);
+const mockDeterminePiece = vi.mocked(determinePiece);
+const mockGetPieceDescription = vi.mocked(getPieceDescription);
 
 function setupFullFlowMocks(overrides?: {
   task?: string;
@@ -91,8 +91,8 @@ function setupFullFlowMocks(overrides?: {
   const task = overrides?.task ?? '# 認証機能追加\nJWT認証を実装する';
   const slug = overrides?.slug ?? 'add-auth';
 
-  mockDetermineWorkflow.mockResolvedValue('default');
-  mockGetWorkflowDescription.mockReturnValue({ name: 'default', description: '' });
+  mockDeterminePiece.mockResolvedValue('default');
+  mockGetPieceDescription.mockReturnValue({ name: 'default', description: '' });
   mockInteractiveMode.mockResolvedValue({ confirmed: true, task });
   mockSummarizeTaskName.mockResolvedValue(slug);
   mockConfirm.mockResolvedValue(false);
@@ -103,8 +103,8 @@ let testDir: string;
 beforeEach(() => {
   vi.clearAllMocks();
   testDir = fs.mkdtempSync(path.join(tmpdir(), 'takt-test-'));
-  mockDetermineWorkflow.mockResolvedValue('default');
-  mockGetWorkflowDescription.mockReturnValue({ name: 'default', description: '' });
+  mockDeterminePiece.mockResolvedValue('default');
+  mockGetPieceDescription.mockReturnValue({ name: 'default', description: '' });
   mockConfirm.mockResolvedValue(false);
 });
 
@@ -117,7 +117,7 @@ afterEach(() => {
 describe('addTask', () => {
   it('should cancel when interactive mode is not confirmed', async () => {
     // Given: user cancels interactive mode
-    mockDetermineWorkflow.mockResolvedValue('default');
+    mockDeterminePiece.mockResolvedValue('default');
     mockInteractiveMode.mockResolvedValue({ confirmed: false, task: '' });
 
     // When
@@ -221,48 +221,48 @@ describe('addTask', () => {
     expect(content).toContain('branch: feat/my-branch');
   });
 
-  it('should include workflow selection in task file', async () => {
-    // Given: determineWorkflow returns a non-default workflow
-    setupFullFlowMocks({ slug: 'with-workflow' });
-    mockDetermineWorkflow.mockResolvedValue('review');
-    mockGetWorkflowDescription.mockReturnValue({ name: 'review', description: 'Code review workflow' });
+  it('should include piece selection in task file', async () => {
+    // Given: determinePiece returns a non-default piece
+    setupFullFlowMocks({ slug: 'with-piece' });
+    mockDeterminePiece.mockResolvedValue('review');
+    mockGetPieceDescription.mockReturnValue({ name: 'review', description: 'Code review piece' });
     mockConfirm.mockResolvedValue(false);
 
     // When
     await addTask(testDir);
 
     // Then
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'with-workflow.yaml');
+    const taskFile = path.join(testDir, '.takt', 'tasks', 'with-piece.yaml');
     const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('workflow: review');
+    expect(content).toContain('piece: review');
   });
 
-  it('should cancel when workflow selection returns null', async () => {
-    // Given: user cancels workflow selection
-    mockDetermineWorkflow.mockResolvedValue(null);
+  it('should cancel when piece selection returns null', async () => {
+    // Given: user cancels piece selection
+    mockDeterminePiece.mockResolvedValue(null);
 
     // When
     await addTask(testDir);
 
-    // Then: no task file created (cancelled at workflow selection)
+    // Then: no task file created (cancelled at piece selection)
     const tasksDir = path.join(testDir, '.takt', 'tasks');
     const files = fs.readdirSync(tasksDir);
     expect(files.length).toBe(0);
   });
 
-  it('should always include workflow from determineWorkflow', async () => {
-    // Given: determineWorkflow returns 'default'
+  it('should always include piece from determinePiece', async () => {
+    // Given: determinePiece returns 'default'
     setupFullFlowMocks({ slug: 'default-wf' });
-    mockDetermineWorkflow.mockResolvedValue('default');
+    mockDeterminePiece.mockResolvedValue('default');
     mockConfirm.mockResolvedValue(false);
 
     // When
     await addTask(testDir);
 
-    // Then: workflow field is included
+    // Then: piece field is included
     const taskFile = path.join(testDir, '.takt', 'tasks', 'default-wf.yaml');
     const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('workflow: default');
+    expect(content).toContain('piece: default');
   });
 
   it('should fetch issue and use directly as task content when given issue reference', async () => {
@@ -84,7 +84,7 @@ describe('GlobalConfig load/save with API keys', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
       'anthropic_api_key: sk-ant-from-yaml',
@@ -101,7 +101,7 @@ describe('GlobalConfig load/save with API keys', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
     ].join('\n');
@@ -117,7 +117,7 @@ describe('GlobalConfig load/save with API keys', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
     ].join('\n');
@@ -137,7 +137,7 @@ describe('GlobalConfig load/save with API keys', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
     ].join('\n');
@@ -174,7 +174,7 @@ describe('resolveAnthropicApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
       'anthropic_api_key: sk-ant-from-yaml',
@@ -190,7 +190,7 @@ describe('resolveAnthropicApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
       'anthropic_api_key: sk-ant-from-yaml',
@@ -206,7 +206,7 @@ describe('resolveAnthropicApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
     ].join('\n');
@@ -248,7 +248,7 @@ describe('resolveOpenaiApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
       'openai_api_key: sk-openai-from-yaml',
@@ -264,7 +264,7 @@ describe('resolveOpenaiApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
       'openai_api_key: sk-openai-from-yaml',
@@ -280,7 +280,7 @@ describe('resolveOpenaiApiKey', () => {
     const yaml = [
       'language: en',
       'trusted_directories: []',
-      'default_workflow: default',
+      'default_piece: default',
       'log_level: info',
       'provider: claude',
     ].join('\n');
@@ -1,10 +1,10 @@
 /**
- * Tests for workflow bookmark functionality
+ * Tests for piece bookmark functionality
  */
 
 import { describe, it, expect } from 'vitest';
 import { handleKeyInput } from '../shared/prompt/index.js';
-import { applyBookmarks, type SelectionOption } from '../features/workflowSelection/index.js';
+import { applyBookmarks, type SelectionOption } from '../features/pieceSelection/index.js';
 
 describe('handleKeyInput - bookmark action', () => {
   const totalItems = 4;
@@ -88,7 +88,7 @@ describe('applyBookmarks', () => {
       { label: '📁 frontend/', value: '__category__:frontend' },
       { label: '📁 backend/', value: '__category__:backend' },
     ];
-    // Only workflow values should match; categories are not bookmarkable
+    // Only piece values should match; categories are not bookmarkable
     const result = applyBookmarks(categoryOptions, ['simple']);
     expect(result[0]!.label).toBe('simple [*]');
    expect(result.map((o) => o.value)).toEqual(['simple', '__category__:frontend', '__category__:backend']);
@@ -53,19 +53,19 @@ vi.mock('../infra/config/index.js', () => ({
 
 vi.mock('../infra/config/paths.js', () => ({
   clearAgentSessions: vi.fn(),
-  getCurrentWorkflow: vi.fn(() => 'default'),
+  getCurrentPiece: vi.fn(() => 'default'),
   isVerboseMode: vi.fn(() => false),
 }));
 
-vi.mock('../infra/config/loaders/workflowLoader.js', () => ({
-  listWorkflows: vi.fn(() => []),
+vi.mock('../infra/config/loaders/pieceLoader.js', () => ({
+  listPieces: vi.fn(() => []),
 }));
 
 vi.mock('../shared/constants.js', async (importOriginal) => {
   const actual = await importOriginal<typeof import('../shared/constants.js')>();
   return {
     ...actual,
-    DEFAULT_WORKFLOW_NAME: 'default',
+    DEFAULT_PIECE_NAME: 'default',
   };
 });
 
@@ -32,7 +32,7 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 
 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn(() => ({})),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
 import { execFileSync } from 'node:child_process';
@@ -8,13 +8,13 @@ import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { randomUUID } from 'node:crypto';
 import {
-  getBuiltinWorkflow,
-  loadAllWorkflows,
-  loadWorkflow,
-  listWorkflows,
+  getBuiltinPiece,
+  loadAllPieces,
+  loadPiece,
+  listPieces,
   loadAgentPromptFromPath,
-  getCurrentWorkflow,
-  setCurrentWorkflow,
+  getCurrentPiece,
+  setCurrentPiece,
   getProjectConfigDir,
   getBuiltinAgentsDir,
   loadInputHistory,
@@ -37,34 +37,34 @@ import {
   loadProjectConfig,
 } from '../infra/config/index.js';
 
-describe('getBuiltinWorkflow', () => {
-  it('should return builtin workflow when it exists in resources', () => {
-    const workflow = getBuiltinWorkflow('default');
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('default');
+describe('getBuiltinPiece', () => {
+  it('should return builtin piece when it exists in resources', () => {
+    const piece = getBuiltinPiece('default');
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('default');
   });
 
-  it('should return null for non-existent workflow names', () => {
-    expect(getBuiltinWorkflow('passthrough')).toBeNull();
-    expect(getBuiltinWorkflow('unknown')).toBeNull();
-    expect(getBuiltinWorkflow('')).toBeNull();
+  it('should return null for non-existent piece names', () => {
+    expect(getBuiltinPiece('passthrough')).toBeNull();
+    expect(getBuiltinPiece('unknown')).toBeNull();
+    expect(getBuiltinPiece('')).toBeNull();
   });
 });
 
-describe('default workflow parallel reviewers movement', () => {
+describe('default piece parallel reviewers movement', () => {
   it('should have a reviewers movement with parallel sub-movements', () => {
-    const workflow = getBuiltinWorkflow('default');
-    expect(workflow).not.toBeNull();
+    const piece = getBuiltinPiece('default');
+    expect(piece).not.toBeNull();
 
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers');
     expect(reviewersMovement).toBeDefined();
     expect(reviewersMovement!.parallel).toBeDefined();
     expect(reviewersMovement!.parallel).toHaveLength(2);
   });
 
   it('should have arch-review and security-review as parallel sub-movements', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers')!;
+    const piece = getBuiltinPiece('default');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers')!;
     const subMovementNames = reviewersMovement.parallel!.map((s) => s.name);
 
     expect(subMovementNames).toContain('arch-review');
@@ -72,8 +72,8 @@ describe('default workflow parallel reviewers movement', () => {
   });
 
   it('should have aggregate conditions on the reviewers parent movement', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers')!;
+    const piece = getBuiltinPiece('default');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers')!;
 
     expect(reviewersMovement.rules).toBeDefined();
     expect(reviewersMovement.rules).toHaveLength(2);
@@ -90,8 +90,8 @@ describe('default workflow parallel reviewers movement', () => {
   });
 
   it('should have matching conditions on sub-movements for aggregation', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers')!;
+    const piece = getBuiltinPiece('default');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers')!;
 
     for (const subMovement of reviewersMovement.parallel!) {
       expect(subMovement.rules).toBeDefined();
@@ -102,32 +102,32 @@ describe('default workflow parallel reviewers movement', () => {
   });
 
   it('should have ai_review transitioning to reviewers movement', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const aiReviewMovement = workflow!.movements.find((s) => s.name === 'ai_review')!;
+    const piece = getBuiltinPiece('default');
+    const aiReviewMovement = piece!.movements.find((s) => s.name === 'ai_review')!;
 
     const approveRule = aiReviewMovement.rules!.find((r) => r.next === 'reviewers');
     expect(approveRule).toBeDefined();
   });
 
   it('should have ai_fix transitioning to ai_review movement', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const aiFixMovement = workflow!.movements.find((s) => s.name === 'ai_fix')!;
+    const piece = getBuiltinPiece('default');
+    const aiFixMovement = piece!.movements.find((s) => s.name === 'ai_fix')!;
 
     const fixedRule = aiFixMovement.rules!.find((r) => r.next === 'ai_review');
     expect(fixedRule).toBeDefined();
   });
 
   it('should have fix movement transitioning back to reviewers', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const fixMovement = workflow!.movements.find((s) => s.name === 'fix')!;
+    const piece = getBuiltinPiece('default');
+    const fixMovement = piece!.movements.find((s) => s.name === 'fix')!;
 
     const fixedRule = fixMovement.rules!.find((r) => r.next === 'reviewers');
     expect(fixedRule).toBeDefined();
   });
 
   it('should not have old separate review/security_review/improve movements', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const movementNames = workflow!.movements.map((s) => s.name);
+    const piece = getBuiltinPiece('default');
+    const movementNames = piece!.movements.map((s) => s.name);
 
     expect(movementNames).not.toContain('review');
     expect(movementNames).not.toContain('security_review');
@@ -136,8 +136,8 @@ describe('default workflow parallel reviewers movement', () => {
   });
 
   it('should have sub-movements with correct agents', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers')!;
+    const piece = getBuiltinPiece('default');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers')!;
 
     const archReview = reviewersMovement.parallel!.find((s) => s.name === 'arch-review')!;
     expect(archReview.agent).toContain('architecture-reviewer');
@@ -147,8 +147,8 @@ describe('default workflow parallel reviewers movement', () => {
   });
 
   it('should have reports configured on sub-movements', () => {
-    const workflow = getBuiltinWorkflow('default');
-    const reviewersMovement = workflow!.movements.find((s) => s.name === 'reviewers')!;
+    const piece = getBuiltinPiece('default');
+    const reviewersMovement = piece!.movements.find((s) => s.name === 'reviewers')!;
 
     const archReview = reviewersMovement.parallel!.find((s) => s.name === 'arch-review')!;
     expect(archReview.report).toBeDefined();
@@ -158,7 +158,7 @@ describe('default workflow parallel reviewers movement', () => {
   });
 });
 
-describe('loadAllWorkflows', () => {
+describe('loadAllPieces', () => {
   let testDir: string;
 
   beforeEach(() => {
@@ -172,13 +172,13 @@ describe('loadAllWorkflows', () => {
     }
   });
 
-  it('should load project-local workflows when cwd is provided', () => {
-    const workflowsDir = join(testDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
+  it('should load project-local pieces when cwd is provided', () => {
+    const piecesDir = join(testDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
 
-    const sampleWorkflow = `
-name: test-workflow
-description: Test workflow
+    const samplePiece = `
+name: test-piece
+description: Test piece
 max_iterations: 10
 movements:
   - name: step1
@@ -188,38 +188,38 @@ movements:
       - condition: Task completed
         next: COMPLETE
 `;
-    writeFileSync(join(workflowsDir, 'test.yaml'), sampleWorkflow);
+    writeFileSync(join(piecesDir, 'test.yaml'), samplePiece);
 
-    const workflows = loadAllWorkflows(testDir);
+    const pieces = loadAllPieces(testDir);
 
-    expect(workflows.has('test')).toBe(true);
+    expect(pieces.has('test')).toBe(true);
   });
 });
 
-describe('loadWorkflow (builtin fallback)', () => {
-  it('should load builtin workflow when user workflow does not exist', () => {
-    const workflow = loadWorkflow('default', process.cwd());
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('default');
+describe('loadPiece (builtin fallback)', () => {
+  it('should load builtin piece when user piece does not exist', () => {
+    const piece = loadPiece('default', process.cwd());
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('default');
   });
 
-  it('should return null for non-existent workflow', () => {
-    const workflow = loadWorkflow('does-not-exist', process.cwd());
-    expect(workflow).toBeNull();
+  it('should return null for non-existent piece', () => {
+    const piece = loadPiece('does-not-exist', process.cwd());
+    expect(piece).toBeNull();
   });
 
-  it('should load builtin workflows like minimal, research', () => {
-    const minimal = loadWorkflow('minimal', process.cwd());
+  it('should load builtin pieces like minimal, research', () => {
+    const minimal = loadPiece('minimal', process.cwd());
     expect(minimal).not.toBeNull();
     expect(minimal!.name).toBe('minimal');
 
-    const research = loadWorkflow('research', process.cwd());
+    const research = loadPiece('research', process.cwd());
     expect(research).not.toBeNull();
     expect(research!.name).toBe('research');
   });
 });
 
-describe('listWorkflows (builtin fallback)', () => {
+describe('listPieces (builtin fallback)', () => {
   let testDir: string;
 
   beforeEach(() => {
@@ -233,20 +233,20 @@ describe('listWorkflows (builtin fallback)', () => {
     }
   });
 
-  it('should include builtin workflows', () => {
-    const workflows = listWorkflows(testDir);
-    expect(workflows).toContain('default');
-    expect(workflows).toContain('minimal');
+  it('should include builtin pieces', () => {
+    const pieces = listPieces(testDir);
+    expect(pieces).toContain('default');
+    expect(pieces).toContain('minimal');
   });
 
   it('should return sorted list', () => {
-    const workflows = listWorkflows(testDir);
-    const sorted = [...workflows].sort();
-    expect(workflows).toEqual(sorted);
+    const pieces = listPieces(testDir);
+    const sorted = [...pieces].sort();
+    expect(pieces).toEqual(sorted);
   });
 });
 
-describe('loadAllWorkflows (builtin fallback)', () => {
+describe('loadAllPieces (builtin fallback)', () => {
   let testDir: string;
 
   beforeEach(() => {
@@ -260,10 +260,10 @@ describe('loadAllWorkflows (builtin fallback)', () => {
     }
   });
 
-  it('should include builtin workflows in the map', () => {
-    const workflows = loadAllWorkflows(testDir);
-    expect(workflows.has('default')).toBe(true);
-    expect(workflows.has('minimal')).toBe(true);
+  it('should include builtin pieces in the map', () => {
+    const pieces = loadAllPieces(testDir);
+    expect(pieces.has('default')).toBe(true);
+    expect(pieces.has('minimal')).toBe(true);
   });
 });
 
@@ -281,7 +281,7 @@ describe('loadAgentPromptFromPath (builtin paths)', () => {
   });
 });
 
-describe('getCurrentWorkflow', () => {
+describe('getCurrentPiece', () => {
   let testDir: string;
 
   beforeEach(() => {
@@ -296,19 +296,19 @@ describe('getCurrentWorkflow', () => {
   });
 
   it('should return default when no config exists', () => {
-    const workflow = getCurrentWorkflow(testDir);
+    const piece = getCurrentPiece(testDir);
 
-    expect(workflow).toBe('default');
+    expect(piece).toBe('default');
   });
 
-  it('should return saved workflow name from config.yaml', () => {
+  it('should return saved piece name from config.yaml', () => {
     const configDir = getProjectConfigDir(testDir);
     mkdirSync(configDir, { recursive: true });
-    writeFileSync(join(configDir, 'config.yaml'), 'workflow: default\n');
+    writeFileSync(join(configDir, 'config.yaml'), 'piece: default\n');
 
-    const workflow = getCurrentWorkflow(testDir);
+    const piece = getCurrentPiece(testDir);
 
-    expect(workflow).toBe('default');
+    expect(piece).toBe('default');
   });
 
   it('should return default for empty config', () => {
@ -316,13 +316,13 @@ describe('getCurrentWorkflow', () => {
|
|||||||
mkdirSync(configDir, { recursive: true });
|
mkdirSync(configDir, { recursive: true });
|
||||||
writeFileSync(join(configDir, 'config.yaml'), '');
|
writeFileSync(join(configDir, 'config.yaml'), '');
|
||||||
|
|
||||||
const workflow = getCurrentWorkflow(testDir);
|
const piece = getCurrentPiece(testDir);
|
||||||
|
|
||||||
expect(workflow).toBe('default');
|
expect(piece).toBe('default');
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('setCurrentWorkflow', () => {
|
describe('setCurrentPiece', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
@ -336,30 +336,30 @@ describe('setCurrentWorkflow', () => {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should save workflow name to config.yaml', () => {
|
it('should save piece name to config.yaml', () => {
|
||||||
setCurrentWorkflow(testDir, 'my-workflow');
|
setCurrentPiece(testDir, 'my-piece');
|
||||||
|
|
||||||
const config = loadProjectConfig(testDir);
|
const config = loadProjectConfig(testDir);
|
||||||
|
|
||||||
expect(config.workflow).toBe('my-workflow');
|
expect(config.piece).toBe('my-piece');
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should create config directory if not exists', () => {
|
it('should create config directory if not exists', () => {
|
||||||
const configDir = getProjectConfigDir(testDir);
|
const configDir = getProjectConfigDir(testDir);
|
||||||
expect(existsSync(configDir)).toBe(false);
|
expect(existsSync(configDir)).toBe(false);
|
||||||
|
|
||||||
setCurrentWorkflow(testDir, 'test');
|
setCurrentPiece(testDir, 'test');
|
||||||
|
|
||||||
expect(existsSync(configDir)).toBe(true);
|
expect(existsSync(configDir)).toBe(true);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should overwrite existing workflow name', () => {
|
it('should overwrite existing piece name', () => {
|
||||||
setCurrentWorkflow(testDir, 'first');
|
setCurrentPiece(testDir, 'first');
|
||||||
setCurrentWorkflow(testDir, 'second');
|
setCurrentPiece(testDir, 'second');
|
||||||
|
|
||||||
const workflow = getCurrentWorkflow(testDir);
|
const piece = getCurrentPiece(testDir);
|
||||||
|
|
||||||
expect(workflow).toBe('second');
|
expect(piece).toBe('second');
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
@ -592,7 +592,7 @@ describe('saveProjectConfig - gitignore copy', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should copy .gitignore when creating new config', () => {
|
it('should copy .gitignore when creating new config', () => {
|
||||||
setCurrentWorkflow(testDir, 'test');
|
setCurrentPiece(testDir, 'test');
|
||||||
|
|
||||||
const configDir = getProjectConfigDir(testDir);
|
const configDir = getProjectConfigDir(testDir);
|
||||||
const gitignorePath = join(configDir, '.gitignore');
|
const gitignorePath = join(configDir, '.gitignore');
|
||||||
@ -604,10 +604,10 @@ describe('saveProjectConfig - gitignore copy', () => {
|
|||||||
// Create config directory without .gitignore
|
// Create config directory without .gitignore
|
||||||
const configDir = getProjectConfigDir(testDir);
|
const configDir = getProjectConfigDir(testDir);
|
||||||
mkdirSync(configDir, { recursive: true });
|
mkdirSync(configDir, { recursive: true });
|
||||||
writeFileSync(join(configDir, 'config.yaml'), 'workflow: existing\n');
|
writeFileSync(join(configDir, 'config.yaml'), 'piece: existing\n');
|
||||||
|
|
||||||
// Save config should still copy .gitignore
|
// Save config should still copy .gitignore
|
||||||
setCurrentWorkflow(testDir, 'updated');
|
setCurrentPiece(testDir, 'updated');
|
||||||
|
|
||||||
const gitignorePath = join(configDir, '.gitignore');
|
const gitignorePath = join(configDir, '.gitignore');
|
||||||
expect(existsSync(gitignorePath)).toBe(true);
|
expect(existsSync(gitignorePath)).toBe(true);
|
||||||
@ -619,7 +619,7 @@ describe('saveProjectConfig - gitignore copy', () => {
|
|||||||
const customContent = '# Custom gitignore\nmy-custom-file';
|
const customContent = '# Custom gitignore\nmy-custom-file';
|
||||||
writeFileSync(join(configDir, '.gitignore'), customContent);
|
writeFileSync(join(configDir, '.gitignore'), customContent);
|
||||||
|
|
||||||
setCurrentWorkflow(testDir, 'test');
|
setCurrentPiece(testDir, 'test');
|
||||||
|
|
||||||
const gitignorePath = join(configDir, '.gitignore');
|
const gitignorePath = join(configDir, '.gitignore');
|
||||||
const content = readFileSync(gitignorePath, 'utf-8');
|
const content = readFileSync(gitignorePath, 'utf-8');
|
||||||
|
|||||||
@@ -1,8 +1,8 @@
 /**
- * WorkflowEngine tests: abort (SIGINT) scenarios.
+ * PieceEngine tests: abort (SIGINT) scenarios.
  *
  * Covers:
- * - abort() sets state to aborted and emits workflow:abort
+ * - abort() sets state to aborted and emits piece:abort
  * - abort() during movement execution interrupts the movement
  * - isAbortRequested() reflects abort state
  * - Double abort() is idempotent
@@ -10,7 +10,7 @@
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
 import { existsSync, rmSync } from 'node:fs';
-import type { WorkflowConfig } from '../core/models/index.js';
+import type { PieceConfig } from '../core/models/index.js';
 
 // --- Mock setup (must be before imports that use these modules) ---
 
@@ -18,11 +18,11 @@ vi.mock('../agents/runner.js', () => ({
   runAgent: vi.fn(),
 }));
 
-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
   detectMatchedRule: vi.fn(),
 }));
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -35,7 +35,7 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 
 // --- Imports (after mocks) ---
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import { runAgent } from '../agents/runner.js';
 import {
   makeResponse,
@@ -47,7 +47,7 @@ import {
   applyDefaultMocks,
 } from './engine-test-helpers.js';
 
-describe('WorkflowEngine: Abort (SIGINT)', () => {
+describe('PieceEngine: Abort (SIGINT)', () => {
   let tmpDir: string;
 
   beforeEach(() => {
@@ -62,7 +62,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
     }
   });
 
-  function makeSimpleConfig(): WorkflowConfig {
+  function makeSimpleConfig(): PieceConfig {
     return {
       name: 'test',
       maxIterations: 10,
@@ -86,10 +86,10 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
   describe('abort() before run loop iteration', () => {
     it('should abort immediately when abort() called before movement execution', async () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
       // Call abort before run
       engine.abort();
@@ -108,7 +108,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
   describe('abort() during movement execution', () => {
     it('should abort when abort() is called during runAgent', async () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       // Simulate abort during movement execution: runAgent rejects after abort() is called
       vi.mocked(runAgent).mockImplementation(async () => {
@@ -117,7 +117,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
       });
 
       const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
       const state = await engine.run();
 
@@ -130,7 +130,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
   describe('abort() idempotency', () => {
     it('should remain abort-requested on multiple abort() calls', () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       engine.abort();
       engine.abort();
@@ -143,14 +143,14 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
   describe('isAbortRequested()', () => {
     it('should return false initially', () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       expect(engine.isAbortRequested()).toBe(false);
     });
 
     it('should return true after abort()', () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       engine.abort();
 
@@ -161,7 +161,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
   describe('abort between movements', () => {
     it('should stop after completing current movement when abort() is called', async () => {
       const config = makeSimpleConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
       // First movement completes normally, but abort is called during it
       vi.mocked(runAgent).mockImplementation(async () => {
@@ -175,7 +175,7 @@ describe('WorkflowEngine: Abort (SIGINT)', () => {
       ]);
 
       const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
       const state = await engine.run();
 
@@ -1,7 +1,7 @@
 /**
- * Tests for WorkflowEngine provider/model overrides.
+ * Tests for PieceEngine provider/model overrides.
  *
- * Verifies that CLI-specified overrides take precedence over workflow movement defaults,
+ * Verifies that CLI-specified overrides take precedence over piece movement defaults,
  * and that movement-specific values are used when no overrides are present.
  */
 
@@ -11,11 +11,11 @@ vi.mock('../agents/runner.js', () => ({
   runAgent: vi.fn(),
 }));
 
-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
   detectMatchedRule: vi.fn(),
 }));
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn(),
   runReportPhase: vi.fn(),
   runStatusJudgmentPhase: vi.fn(),
@@ -26,9 +26,9 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
   generateReportDir: vi.fn().mockReturnValue('test-report-dir'),
 }));
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import { runAgent } from '../agents/runner.js';
-import type { WorkflowConfig } from '../core/models/index.js';
+import type { PieceConfig } from '../core/models/index.js';
 import {
   makeResponse,
   makeRule,
@@ -38,19 +38,19 @@ import {
   applyDefaultMocks,
 } from './engine-test-helpers.js';
 
-describe('WorkflowEngine agent overrides', () => {
+describe('PieceEngine agent overrides', () => {
   beforeEach(() => {
     vi.resetAllMocks();
     applyDefaultMocks();
   });
 
-  it('respects workflow movement provider/model even when CLI overrides are provided', async () => {
+  it('respects piece movement provider/model even when CLI overrides are provided', async () => {
     const movement = makeMovement('plan', {
       provider: 'claude',
       model: 'claude-movement',
       rules: [makeRule('done', 'COMPLETE')],
     });
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'override-test',
       movements: [movement],
       initialMovement: 'plan',
@@ -62,7 +62,7 @@ describe('WorkflowEngine agent overrides', () => {
     ]);
     mockDetectMatchedRuleSequence([{ index: 0, method: 'phase1_tag' }]);
 
-    const engine = new WorkflowEngine(config, '/tmp/project', 'override task', {
+    const engine = new PieceEngine(config, '/tmp/project', 'override task', {
       projectCwd: '/tmp/project',
       provider: 'codex',
       model: 'cli-model',
@@ -75,11 +75,11 @@ describe('WorkflowEngine agent overrides', () => {
     expect(options.model).toBe('claude-movement');
   });
 
-  it('allows CLI overrides when workflow movement leaves provider/model undefined', async () => {
+  it('allows CLI overrides when piece movement leaves provider/model undefined', async () => {
     const movement = makeMovement('plan', {
       rules: [makeRule('done', 'COMPLETE')],
     });
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'override-fallback',
       movements: [movement],
       initialMovement: 'plan',
@@ -91,7 +91,7 @@ describe('WorkflowEngine agent overrides', () => {
     ]);
     mockDetectMatchedRuleSequence([{ index: 0, method: 'phase1_tag' }]);
 
-    const engine = new WorkflowEngine(config, '/tmp/project', 'override task', {
+    const engine = new PieceEngine(config, '/tmp/project', 'override task', {
       projectCwd: '/tmp/project',
       provider: 'codex',
       model: 'cli-model',
@@ -104,13 +104,13 @@ describe('WorkflowEngine agent overrides', () => {
     expect(options.model).toBe('cli-model');
   });
 
-  it('falls back to workflow movement provider/model when no overrides supplied', async () => {
+  it('falls back to piece movement provider/model when no overrides supplied', async () => {
     const movement = makeMovement('plan', {
       provider: 'claude',
       model: 'movement-model',
       rules: [makeRule('done', 'COMPLETE')],
     });
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'movement-defaults',
       movements: [movement],
       initialMovement: 'plan',
@@ -122,7 +122,7 @@ describe('WorkflowEngine agent overrides', () => {
     ]);
     mockDetectMatchedRuleSequence([{ index: 0, method: 'phase1_tag' }]);
 
-    const engine = new WorkflowEngine(config, '/tmp/project', 'movement task', { projectCwd: '/tmp/project' });
+    const engine = new PieceEngine(config, '/tmp/project', 'movement task', { projectCwd: '/tmp/project' });
     await engine.run();
 
     const options = vi.mocked(runAgent).mock.calls[0][2];
 
@@ -1,5 +1,5 @@
 /**
- * WorkflowEngine integration tests: blocked handling scenarios.
+ * PieceEngine integration tests: blocked handling scenarios.
  *
  * Covers:
 * - Blocked without onUserInput callback (abort)
@@ -16,11 +16,11 @@ vi.mock('../agents/runner.js', () => ({
   runAgent: vi.fn(),
 }));
 
-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
   detectMatchedRule: vi.fn(),
 }));
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -33,17 +33,17 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 
 // --- Imports (after mocks) ---
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import {
   makeResponse,
-  buildDefaultWorkflowConfig,
+  buildDefaultPieceConfig,
   mockRunAgentSequence,
   mockDetectMatchedRuleSequence,
   createTestTmpDir,
   applyDefaultMocks,
 } from './engine-test-helpers.js';
 
-describe('WorkflowEngine Integration: Blocked Handling', () => {
+describe('PieceEngine Integration: Blocked Handling', () => {
   let tmpDir: string;
 
   beforeEach(() => {
@@ -59,8 +59,8 @@ describe('WorkflowEngine Integration: Blocked Handling', () => {
   });
 
   it('should abort when blocked and no onUserInput callback', async () => {
-    const config = buildDefaultWorkflowConfig();
+    const config = buildDefaultPieceConfig();
-    const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+    const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
     mockRunAgentSequence([
       makeResponse({ agent: 'plan', status: 'blocked', content: 'Need clarification' }),
@@ -73,7 +73,7 @@ describe('WorkflowEngine Integration: Blocked Handling', () => {
     const blockedFn = vi.fn();
     const abortFn = vi.fn();
     engine.on('movement:blocked', blockedFn);
-    engine.on('workflow:abort', abortFn);
+    engine.on('piece:abort', abortFn);
 
     const state = await engine.run();
 
@@ -83,9 +83,9 @@ describe('WorkflowEngine Integration: Blocked Handling', () => {
   });
 
   it('should abort when blocked and onUserInput returns null', async () => {
-    const config = buildDefaultWorkflowConfig();
+    const config = buildDefaultPieceConfig();
     const onUserInput = vi.fn().mockResolvedValue(null);
-    const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir, onUserInput });
+    const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir, onUserInput });
 
     mockRunAgentSequence([
       makeResponse({ agent: 'plan', status: 'blocked', content: 'Need info' }),
@@ -102,9 +102,9 @@ describe('WorkflowEngine Integration: Blocked Handling', () => {
   });
 
   it('should continue when blocked and onUserInput provides input', async () => {
-    const config = buildDefaultWorkflowConfig();
+    const config = buildDefaultPieceConfig();
     const onUserInput = vi.fn().mockResolvedValueOnce('User provided clarification');
-    const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir, onUserInput });
+    const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir, onUserInput });
 
     mockRunAgentSequence([
       // First: plan is blocked
@@ -1,5 +1,5 @@
 /**
- * WorkflowEngine integration tests: error handling scenarios.
+ * PieceEngine integration tests: error handling scenarios.
  *
  * Covers:
 * - No rule matched (abort)
@@ -17,11 +17,11 @@ vi.mock('../agents/runner.js', () => ({
   runAgent: vi.fn(),
 }));
 
-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
   detectMatchedRule: vi.fn(),
 }));
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -34,21 +34,21 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 
 // --- Imports (after mocks) ---
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import { runAgent } from '../agents/runner.js';
-import { detectMatchedRule } from '../core/workflow/index.js';
+import { detectMatchedRule } from '../core/piece/index.js';
 import {
   makeResponse,
   makeMovement,
   makeRule,
-  buildDefaultWorkflowConfig,
+  buildDefaultPieceConfig,
   mockRunAgentSequence,
   mockDetectMatchedRuleSequence,
   createTestTmpDir,
   applyDefaultMocks,
 } from './engine-test-helpers.js';
 
-describe('WorkflowEngine Integration: Error Handling', () => {
+describe('PieceEngine Integration: Error Handling', () => {
   let tmpDir: string;
 
   beforeEach(() => {
@@ -68,8 +68,8 @@ describe('WorkflowEngine Integration: Error Handling', () => {
   // =====================================================
   describe('No rule matched', () => {
     it('should abort when detectMatchedRule returns undefined', async () => {
-      const config = buildDefaultWorkflowConfig();
+      const config = buildDefaultPieceConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
      mockRunAgentSequence([
        makeResponse({ agent: 'plan', content: 'Unclear output' }),
@@ -78,7 +78,7 @@ describe('WorkflowEngine Integration: Error Handling', () => {
      mockDetectMatchedRuleSequence([undefined]);
 
      const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
      const state = await engine.run();
 
@@ -94,13 +94,13 @@ describe('WorkflowEngine Integration: Error Handling', () => {
   // =====================================================
   describe('runAgent throws', () => {
     it('should abort when runAgent throws an error', async () => {
-      const config = buildDefaultWorkflowConfig();
+      const config = buildDefaultPieceConfig();
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
      vi.mocked(runAgent).mockRejectedValueOnce(new Error('API connection failed'));
 
      const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
      const state = await engine.run();
 
@@ -116,7 +116,7 @@ describe('WorkflowEngine Integration: Error Handling', () => {
   // =====================================================
   describe('Loop detection', () => {
     it('should abort when loop detected with action: abort', async () => {
-      const config = buildDefaultWorkflowConfig({
+      const config = buildDefaultPieceConfig({
        maxIterations: 100,
        loopDetection: { maxConsecutiveSameStep: 3, action: 'abort' },
        initialMovement: 'loop-step',
@@ -127,7 +127,7 @@ describe('WorkflowEngine Integration: Error Handling', () => {
       ],
      });
 
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
      for (let i = 0; i < 5; i++) {
        vi.mocked(runAgent).mockResolvedValueOnce(
@@ -139,7 +139,7 @@ describe('WorkflowEngine Integration: Error Handling', () => {
      }
 
      const abortFn = vi.fn();
-      engine.on('workflow:abort', abortFn);
+      engine.on('piece:abort', abortFn);
 
      const state = await engine.run();
 
@@ -156,8 +156,8 @@ describe('WorkflowEngine Integration: Error Handling', () => {
   // =====================================================
   describe('Iteration limit', () => {
     it('should abort when max iterations reached without onIterationLimit callback', async () => {
-      const config = buildDefaultWorkflowConfig({ maxIterations: 2 });
+      const config = buildDefaultPieceConfig({ maxIterations: 2 });
-      const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+      const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 
      mockRunAgentSequence([
|
||||||
makeResponse({ agent: 'plan', content: 'Plan done' }),
|
makeResponse({ agent: 'plan', content: 'Plan done' }),
|
||||||
@ -174,7 +174,7 @@ describe('WorkflowEngine Integration: Error Handling', () => {
|
|||||||
const limitFn = vi.fn();
|
const limitFn = vi.fn();
|
||||||
const abortFn = vi.fn();
|
const abortFn = vi.fn();
|
||||||
engine.on('iteration:limit', limitFn);
|
engine.on('iteration:limit', limitFn);
|
||||||
engine.on('workflow:abort', abortFn);
|
engine.on('piece:abort', abortFn);
|
||||||
|
|
||||||
const state = await engine.run();
|
const state = await engine.run();
|
||||||
|
|
||||||
@ -186,11 +186,11 @@ describe('WorkflowEngine Integration: Error Handling', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should extend iterations when onIterationLimit provides additional iterations', async () => {
|
it('should extend iterations when onIterationLimit provides additional iterations', async () => {
|
||||||
const config = buildDefaultWorkflowConfig({ maxIterations: 2 });
|
const config = buildDefaultPieceConfig({ maxIterations: 2 });
|
||||||
|
|
||||||
const onIterationLimit = vi.fn().mockResolvedValueOnce(10);
|
const onIterationLimit = vi.fn().mockResolvedValueOnce(10);
|
||||||
|
|
||||||
const engine = new WorkflowEngine(config, tmpDir, 'test task', {
|
const engine = new PieceEngine(config, tmpDir, 'test task', {
|
||||||
projectCwd: tmpDir,
|
projectCwd: tmpDir,
|
||||||
onIterationLimit,
|
onIterationLimit,
|
||||||
});
|
});
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
 /**
-* WorkflowEngine integration tests: happy path and normal flow scenarios.
+* PieceEngine integration tests: happy path and normal flow scenarios.
 *
 * Covers:
 * - Full happy path (plan → implement → ai_review → reviewers → supervise → COMPLETE)
@@ -13,7 +13,7 @@

 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
 import { existsSync, rmSync } from 'node:fs';
-import type { WorkflowConfig, WorkflowMovement } from '../core/models/index.js';
+import type { PieceConfig, PieceMovement } from '../core/models/index.js';

 // --- Mock setup (must be before imports that use these modules) ---

@@ -21,11 +21,11 @@ vi.mock('../agents/runner.js', () => ({
 runAgent: vi.fn(),
 }));

-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
 detectMatchedRule: vi.fn(),
 }));

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
 needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
 runReportPhase: vi.fn().mockResolvedValue(undefined),
 runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -38,20 +38,20 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({

 // --- Imports (after mocks) ---

-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import { runAgent } from '../agents/runner.js';
 import {
 makeResponse,
 makeMovement,
 makeRule,
-buildDefaultWorkflowConfig,
+buildDefaultPieceConfig,
 mockRunAgentSequence,
 mockDetectMatchedRuleSequence,
 createTestTmpDir,
 applyDefaultMocks,
 } from './engine-test-helpers.js';

-describe('WorkflowEngine Integration: Happy Path', () => {
+describe('PieceEngine Integration: Happy Path', () => {
 let tmpDir: string;

 beforeEach(() => {
@@ -71,8 +71,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Happy path', () => {
 it('should complete: plan → implement → ai_review → reviewers(all approved) → supervise → COMPLETE', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan complete' }),
@@ -94,7 +94,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 ]);

 const completeFn = vi.fn();
-engine.on('workflow:complete', completeFn);
+engine.on('piece:complete', completeFn);

 const state = await engine.run();

@@ -110,8 +110,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Review reject and fix loop', () => {
 it('should handle: reviewers(needs_fix) → fix → reviewers(all approved) → supervise → COMPLETE', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -151,8 +151,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 });

 it('should inject latest reviewers output as Previous Response for repeated fix steps', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -220,8 +220,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 });

 it('should use the latest movement output across different steps for Previous Response', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -278,8 +278,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('AI review reject and fix', () => {
 it('should handle: ai_review(issues) → ai_fix → reviewers → supervise → COMPLETE', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -315,8 +315,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('ABORT transition', () => {
 it('should abort when movement transitions to ABORT', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Requirements unclear' }),
@@ -328,7 +328,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 ]);

 const abortFn = vi.fn();
-engine.on('workflow:abort', abortFn);
+engine.on('piece:abort', abortFn);

 const state = await engine.run();

@@ -342,8 +342,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Event emissions', () => {
 it('should emit movement:start and movement:complete for each movement', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -375,12 +375,12 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 expect(startFn).toHaveBeenCalledTimes(5);
 expect(completeFn).toHaveBeenCalledTimes(5);

-const startedMovements = startFn.mock.calls.map(call => (call[0] as WorkflowMovement).name);
+const startedMovements = startFn.mock.calls.map(call => (call[0] as PieceMovement).name);
 expect(startedMovements).toEqual(['plan', 'implement', 'ai_review', 'reviewers', 'supervise']);
 });

 it('should pass instruction to movement:start for normal movements', async () => {
-const simpleConfig: WorkflowConfig = {
+const simpleConfig: PieceConfig = {
 name: 'test',
 maxIterations: 10,
 initialMovement: 'plan',
@@ -390,7 +390,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 }),
 ],
 };
-const engine = new WorkflowEngine(simpleConfig, tmpDir, 'test task', { projectCwd: tmpDir });
+const engine = new PieceEngine(simpleConfig, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -412,8 +412,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 });

 it('should pass empty instruction to movement:start for parallel movements', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -441,7 +441,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {

 // Find the "reviewers" movement:start call (parallel movement)
 const reviewersCall = startFn.mock.calls.find(
-(call) => (call[0] as WorkflowMovement).name === 'reviewers'
+(call) => (call[0] as PieceMovement).name === 'reviewers'
 );
 expect(reviewersCall).toBeDefined();
 // Parallel movements emit empty string for instruction
@@ -450,8 +450,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 });

 it('should emit iteration:limit when max iterations reached', async () => {
-const config = buildDefaultWorkflowConfig({ maxIterations: 1 });
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig({ maxIterations: 1 });
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -474,8 +474,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Movement output tracking', () => {
 it('should store outputs for all executed movements', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan output' }),
@@ -510,7 +510,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Phase events', () => {
 it('should emit phase:start and phase:complete events for Phase 1', async () => {
-const simpleConfig: WorkflowConfig = {
+const simpleConfig: PieceConfig = {
 name: 'test',
 maxIterations: 10,
 initialMovement: 'plan',
@@ -520,7 +520,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 }),
 ],
 };
-const engine = new WorkflowEngine(simpleConfig, tmpDir, 'test task', { projectCwd: tmpDir });
+const engine = new PieceEngine(simpleConfig, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -547,8 +547,8 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 });

 it('should emit phase events for all movements in happy path', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -593,15 +593,15 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 // =====================================================
 describe('Config validation', () => {
 it('should throw when initial movement does not exist', () => {
-const config = buildDefaultWorkflowConfig({ initialMovement: 'nonexistent' });
+const config = buildDefaultPieceConfig({ initialMovement: 'nonexistent' });

 expect(() => {
-new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 }).toThrow('Unknown movement: nonexistent');
 });

 it('should throw when rule references nonexistent movement', () => {
-const config: WorkflowConfig = {
+const config: PieceConfig = {
 name: 'test',
 maxIterations: 10,
 initialMovement: 'step1',
@@ -613,7 +613,7 @@ describe('WorkflowEngine Integration: Happy Path', () => {
 };

 expect(() => {
-new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
 }).toThrow('nonexistent_step');
 });
 });
@@ -1,5 +1,5 @@
 /**
-* WorkflowEngine integration tests: parallel movement aggregation.
+* PieceEngine integration tests: parallel movement aggregation.
 *
 * Covers:
 * - Aggregated output format (## headers and --- separators)
@@ -16,11 +16,11 @@ vi.mock('../agents/runner.js', () => ({
 runAgent: vi.fn(),
 }));

-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
 detectMatchedRule: vi.fn(),
 }));

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
 needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
 runReportPhase: vi.fn().mockResolvedValue(undefined),
 runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -33,18 +33,18 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({

 // --- Imports (after mocks) ---

-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 import { runAgent } from '../agents/runner.js';
 import {
 makeResponse,
-buildDefaultWorkflowConfig,
+buildDefaultPieceConfig,
 mockRunAgentSequence,
 mockDetectMatchedRuleSequence,
 createTestTmpDir,
 applyDefaultMocks,
 } from './engine-test-helpers.js';

-describe('WorkflowEngine Integration: Parallel Movement Aggregation', () => {
+describe('PieceEngine Integration: Parallel Movement Aggregation', () => {
 let tmpDir: string;

 beforeEach(() => {
@@ -60,8 +60,8 @@ describe('WorkflowEngine Integration: Parallel Movement Aggregation', () => {
 });

 it('should aggregate sub-movement outputs with ## headers and --- separators', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan done' }),
@@ -97,8 +97,8 @@ describe('WorkflowEngine Integration: Parallel Movement Aggregation', () => {
 });

 it('should store individual sub-movement outputs in movementOutputs', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -129,8 +129,8 @@ describe('WorkflowEngine Integration: Parallel Movement Aggregation', () => {
 });

 it('should execute sub-movements concurrently (both runAgent calls happen)', async () => {
-const config = buildDefaultWorkflowConfig();
-const engine = new WorkflowEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
+const config = buildDefaultPieceConfig();
+const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

 mockRunAgentSequence([
 makeResponse({ agent: 'plan', content: 'Plan' }),
@@ -8,8 +8,8 @@ import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { EventEmitter } from 'node:events';
 import { existsSync } from 'node:fs';
-import { isReportObjectConfig } from '../core/workflow/index.js';
-import type { WorkflowMovement, ReportObjectConfig, ReportConfig } from '../core/models/index.js';
+import { isReportObjectConfig } from '../core/piece/index.js';
+import type { PieceMovement, ReportObjectConfig, ReportConfig } from '../core/models/index.js';

 /**
 * Extracted emitMovementReports logic for unit testing.
@@ -19,7 +19,7 @@ import type { WorkflowMovement, ReportObjectConfig, ReportConfig } from '../core
 */
 function emitMovementReports(
 emitter: EventEmitter,
-movement: WorkflowMovement,
+movement: PieceMovement,
 reportDir: string,
 projectCwd: string,
 ): void {
@@ -39,7 +39,7 @@ function emitMovementReports(

 function emitIfReportExists(
 emitter: EventEmitter,
-movement: WorkflowMovement,
+movement: PieceMovement,
 baseDir: string,
 fileName: string,
 ): void {
@@ -49,8 +49,8 @@ function emitIfReportExists(
 }
 }

-/** Create a minimal WorkflowMovement for testing */
-function createMovement(overrides: Partial<WorkflowMovement> = {}): WorkflowMovement {
+/** Create a minimal PieceMovement for testing */
+function createMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
 return {
 name: 'test-movement',
 agent: 'coder',
@@ -1,7 +1,7 @@
 /**
-* Shared helpers for WorkflowEngine integration tests.
+* Shared helpers for PieceEngine integration tests.
 *
-* Provides mock setup, factory functions, and a default workflow config
+* Provides mock setup, factory functions, and a default piece config
 * matching the parallel reviewers structure (plan → implement → ai_review → reviewers → supervise).
 */

@@ -10,14 +10,14 @@ import { mkdirSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { randomUUID } from 'node:crypto';
-import type { WorkflowConfig, WorkflowMovement, AgentResponse, WorkflowRule } from '../core/models/index.js';
+import type { PieceConfig, PieceMovement, AgentResponse, PieceRule } from '../core/models/index.js';

 // --- Mock imports (consumers must call vi.mock before importing this) ---

 import { runAgent } from '../agents/runner.js';
-import { detectMatchedRule } from '../core/workflow/index.js';
-import type { RuleMatch } from '../core/workflow/index.js';
-import { needsStatusJudgmentPhase, runReportPhase, runStatusJudgmentPhase } from '../core/workflow/index.js';
+import { detectMatchedRule } from '../core/piece/index.js';
+import type { RuleMatch } from '../core/piece/index.js';
+import { needsStatusJudgmentPhase, runReportPhase, runStatusJudgmentPhase } from '../core/piece/index.js';
 import { generateReportDir } from '../shared/utils/index.js';

 // --- Factory functions ---
@@ -33,11 +33,11 @@ export function makeResponse(overrides: Partial<AgentResponse> = {}): AgentRespo
 };
 }

-export function makeRule(condition: string, next: string, extra: Partial<WorkflowRule> = {}): WorkflowRule {
+export function makeRule(condition: string, next: string, extra: Partial<PieceRule> = {}): PieceRule {
 return { condition, next, ...extra };
 }

-export function makeMovement(name: string, overrides: Partial<WorkflowMovement> = {}): WorkflowMovement {
+export function makeMovement(name: string, overrides: Partial<PieceMovement> = {}): PieceMovement {
 return {
 name,
 agent: `../agents/${name}.md`,
@@ -49,10 +49,10 @@ export function makeMovement(name: string, overrides: Partial<WorkflowMovement>
 }

 /**
-* Build a workflow config matching the default.yaml parallel reviewers structure:
+* Build a piece config matching the default.yaml parallel reviewers structure:
 * plan → implement → ai_review → (ai_fix↔) → reviewers(parallel) → (fix↔) → supervise
 */
-export function buildDefaultWorkflowConfig(overrides: Partial<WorkflowConfig> = {}): WorkflowConfig {
+export function buildDefaultPieceConfig(overrides: Partial<PieceConfig> = {}): PieceConfig {
 const archReviewSubMovement = makeMovement('arch-review', {
 rules: [
 makeRule('approved', 'COMPLETE'),
@@ -69,7 +69,7 @@ export function buildDefaultWorkflowConfig(overrides: Partial<WorkflowConfig> =

 return {
 name: 'test-default',
-description: 'Test workflow',
+description: 'Test piece',
 maxIterations: 30,
 initialMovement: 'plan',
 movements: [
@@ -17,11 +17,11 @@ vi.mock('../agents/runner.js', () => ({
 runAgent: vi.fn(),
 }));

-vi.mock('../core/workflow/evaluation/index.js', () => ({
+vi.mock('../core/piece/evaluation/index.js', () => ({
 detectMatchedRule: vi.fn(),
 }));

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
 needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
 runReportPhase: vi.fn().mockResolvedValue(undefined),
 runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -34,8 +34,8 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({

 // --- Imports (after mocks) ---

-import { WorkflowEngine } from '../core/workflow/index.js';
-import { runReportPhase } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
+import { runReportPhase } from '../core/piece/index.js';
 import {
 makeResponse,
 makeMovement,
@@ -44,7 +44,7 @@ import {
 mockDetectMatchedRuleSequence,
 applyDefaultMocks,
 } from './engine-test-helpers.js';
-import type { WorkflowConfig } from '../core/models/index.js';
+import type { PieceConfig } from '../core/models/index.js';

 function createWorktreeDirs(): { projectCwd: string; cloneCwd: string } {
 const base = join(tmpdir(), `takt-worktree-test-${randomUUID()}`);
@@ -64,10 +64,10 @@ function createWorktreeDirs(): { projectCwd: string; cloneCwd: string } {
 return { projectCwd, cloneCwd };
 }

-function buildSimpleConfig(): WorkflowConfig {
+function buildSimpleConfig(): PieceConfig {
 return {
 name: 'worktree-test',
-description: 'Test workflow for worktree',
+description: 'Test piece for worktree',
 maxIterations: 10,
 initialMovement: 'review',
 movements: [
@@ -81,7 +81,7 @@ function buildSimpleConfig(): WorkflowConfig {
 };
 }

-describe('WorkflowEngine: worktree reportDir resolution', () => {
+describe('PieceEngine: worktree reportDir resolution', () => {
 let projectCwd: string;
 let cloneCwd: string;
 let baseDir: string;
@@ -104,7 +104,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 it('should pass projectCwd-based reportDir to phase runner context in worktree mode', async () => {
 // Given: worktree environment where cwd !== projectCwd
 const config = buildSimpleConfig();
-const engine = new WorkflowEngine(config, cloneCwd, 'test task', {
+const engine = new PieceEngine(config, cloneCwd, 'test task', {
 projectCwd,
 });

@@ -115,7 +115,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 { index: 0, method: 'tag' as const },
 ]);

-// When: run the workflow
+// When: run the piece
 await engine.run();

 // Then: runReportPhase was called with context containing projectCwd-based reportDir
@@ -133,7 +133,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {

 it('should pass projectCwd-based reportDir to buildInstruction (used by {report_dir} placeholder)', async () => {
 // Given: worktree environment with a movement that uses {report_dir} in template
-const config: WorkflowConfig = {
+const config: PieceConfig = {
 name: 'worktree-test',
 description: 'Test',
 maxIterations: 10,
@@ -148,7 +148,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 }),
 ],
 };
-const engine = new WorkflowEngine(config, cloneCwd, 'test task', {
+const engine = new PieceEngine(config, cloneCwd, 'test task', {
 projectCwd,
 });

@@ -160,7 +160,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 { index: 0, method: 'tag' as const },
 ]);

-// When: run the workflow
+// When: run the piece
 await engine.run();

 // Then: the instruction should contain projectCwd-based reportDir
@@ -178,7 +178,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 // Given: normal environment where cwd === projectCwd
 const normalDir = projectCwd;
 const config = buildSimpleConfig();
-const engine = new WorkflowEngine(config, normalDir, 'test task', {
+const engine = new PieceEngine(config, normalDir, 'test task', {
 projectCwd: normalDir,
 });

@@ -189,7 +189,7 @@ describe('WorkflowEngine: worktree reportDir resolution', () => {
 { index: 0, method: 'tag' as const },
 ]);

-// When: run the workflow
+// When: run the piece
 await engine.run();

 // Then: reportDir should be the same (cwd === projectCwd)
@@ -7,7 +7,7 @@ import {
 EXIT_SUCCESS,
 EXIT_GENERAL_ERROR,
 EXIT_ISSUE_FETCH_FAILED,
-EXIT_WORKFLOW_FAILED,
+EXIT_PIECE_FAILED,
 EXIT_GIT_OPERATION_FAILED,
 EXIT_PR_CREATION_FAILED,
 EXIT_SIGINT,
@@ -19,7 +19,7 @@ describe('exit codes', () => {
 EXIT_SUCCESS,
 EXIT_GENERAL_ERROR,
 EXIT_ISSUE_FETCH_FAILED,
-EXIT_WORKFLOW_FAILED,
+EXIT_PIECE_FAILED,
 EXIT_GIT_OPERATION_FAILED,
 EXIT_PR_CREATION_FAILED,
 EXIT_SIGINT,
@@ -32,7 +32,7 @@ describe('exit codes', () => {
 expect(EXIT_SUCCESS).toBe(0);
 expect(EXIT_GENERAL_ERROR).toBe(1);
 expect(EXIT_ISSUE_FETCH_FAILED).toBe(2);
-expect(EXIT_WORKFLOW_FAILED).toBe(3);
+expect(EXIT_PIECE_FAILED).toBe(3);
 expect(EXIT_GIT_OPERATION_FAILED).toBe(4);
 expect(EXIT_PR_CREATION_FAILED).toBe(5);
 expect(EXIT_SIGINT).toBe(130);
@@ -19,12 +19,12 @@ describe('buildPrBody', () => {
 comments: [],
 };

-const result = buildPrBody(issue, 'Workflow `default` completed.');
+const result = buildPrBody(issue, 'Piece `default` completed.');

 expect(result).toContain('## Summary');
 expect(result).toContain('Implement username/password authentication.');
 expect(result).toContain('## Execution Report');
-expect(result).toContain('Workflow `default` completed.');
+expect(result).toContain('Piece `default` completed.');
 expect(result).toContain('Closes #99');
 });

@@ -40,7 +40,7 @@ describe('loadGlobalConfig', () => {

 expect(config.language).toBe('en');
 expect(config.trustedDirectories).toEqual([]);
-expect(config.defaultWorkflow).toBe('default');
+expect(config.defaultPiece).toBe('default');
 expect(config.logLevel).toBe('info');
 expect(config.provider).toBe('claude');
 expect(config.model).toBeUndefined();
@@ -35,7 +35,7 @@ describe('getLabel', () => {

 describe('template variable substitution', () => {
 it('replaces {variableName} placeholders with provided values', () => {
-const result = getLabel('workflow.iterationLimit.maxReached', undefined, {
+const result = getLabel('piece.iterationLimit.maxReached', undefined, {
 currentIteration: '5',
 maxIterations: '10',
 });
@@ -43,14 +43,14 @@ describe('getLabel', () => {
 });

 it('replaces single variable', () => {
-const result = getLabel('workflow.notifyComplete', undefined, {
+const result = getLabel('piece.notifyComplete', undefined, {
 iteration: '3',
 });
 expect(result).toContain('3 iterations');
 });

 it('leaves unmatched placeholders as-is', () => {
-const result = getLabel('workflow.notifyAbort', undefined, {});
+const result = getLabel('piece.notifyAbort', undefined, {});
 expect(result).toContain('{reason}');
 });
 });
@@ -100,29 +100,29 @@ describe('label integrity', () => {
 expect(ui).toHaveProperty('cancelled');
 });

-it('contains all expected workflow keys in en', () => {
-expect(() => getLabel('workflow.iterationLimit.maxReached')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.currentMovement')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.continueQuestion')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.continueLabel')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.continueDescription')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.stopLabel')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.inputPrompt')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.invalidInput')).not.toThrow();
-expect(() => getLabel('workflow.iterationLimit.userInputPrompt')).not.toThrow();
-expect(() => getLabel('workflow.notifyComplete')).not.toThrow();
-expect(() => getLabel('workflow.notifyAbort')).not.toThrow();
-expect(() => getLabel('workflow.sigintGraceful')).not.toThrow();
-expect(() => getLabel('workflow.sigintForce')).not.toThrow();
+it('contains all expected piece keys in en', () => {
+expect(() => getLabel('piece.iterationLimit.maxReached')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.currentMovement')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.continueQuestion')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.continueLabel')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.continueDescription')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.stopLabel')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.inputPrompt')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.invalidInput')).not.toThrow();
+expect(() => getLabel('piece.iterationLimit.userInputPrompt')).not.toThrow();
+expect(() => getLabel('piece.notifyComplete')).not.toThrow();
+expect(() => getLabel('piece.notifyAbort')).not.toThrow();
+expect(() => getLabel('piece.sigintGraceful')).not.toThrow();
+expect(() => getLabel('piece.sigintForce')).not.toThrow();
 });

 it('en and ja have the same key structure', () => {
 const stringKeys = [
 'interactive.ui.intro',
 'interactive.ui.cancelled',
-'workflow.iterationLimit.maxReached',
-'workflow.notifyComplete',
-'workflow.sigintGraceful',
+'piece.iterationLimit.maxReached',
+'piece.notifyComplete',
+'piece.sigintGraceful',
 ];
 for (const key of stringKeys) {
 expect(() => getLabel(key, 'en')).not.toThrow();
@@ -12,22 +12,22 @@ import {
 type ReportInstructionContext,
 type StatusJudgmentContext,
 type InstructionContext,
-} from '../core/workflow/index.js';
+} from '../core/piece/index.js';

 // Function wrappers for test readability
-function buildInstruction(step: WorkflowMovement, ctx: InstructionContext): string {
+function buildInstruction(step: PieceMovement, ctx: InstructionContext): string {
 return new InstructionBuilder(step, ctx).build();
 }
-function buildReportInstruction(step: WorkflowMovement, ctx: ReportInstructionContext): string {
+function buildReportInstruction(step: PieceMovement, ctx: ReportInstructionContext): string {
 return new ReportInstructionBuilder(step, ctx).build();
 }
-function buildStatusJudgmentInstruction(step: WorkflowMovement, ctx: StatusJudgmentContext): string {
+function buildStatusJudgmentInstruction(step: PieceMovement, ctx: StatusJudgmentContext): string {
 return new StatusJudgmentBuilder(step, ctx).build();
 }
-import type { WorkflowMovement, WorkflowRule } from '../core/models/index.js';
+import type { PieceMovement, PieceRule } from '../core/models/index.js';


-function createMinimalStep(template: string): WorkflowMovement {
+function createMinimalStep(template: string): PieceMovement {
 return {
 name: 'test-step',
 agent: 'test-agent',
@@ -186,7 +186,7 @@ describe('instruction-builder', () => {
 });

 describe('generateStatusRulesComponents', () => {
-const rules: WorkflowRule[] = [
+const rules: PieceRule[] = [
 { condition: '要件が明確で実装可能', next: 'implement' },
 { condition: 'ユーザーが質問をしている', next: 'COMPLETE' },
 { condition: '要件が不明確、情報不足', next: 'ABORT', appendix: '確認事項:\n- {質問1}\n- {質問2}' },
@@ -201,7 +201,7 @@ describe('instruction-builder', () => {
 });

 it('should generate criteria table with numbered tags (en)', () => {
-const enRules: WorkflowRule[] = [
+const enRules: PieceRule[] = [
 { condition: 'Requirements are clear', next: 'implement' },
 { condition: 'User is asking a question', next: 'COMPLETE' },
 ];
@@ -229,7 +229,7 @@ describe('instruction-builder', () => {
 });

 it('should not generate appendix when no rules have appendix', () => {
-const noAppendixRules: WorkflowRule[] = [
+const noAppendixRules: PieceRule[] = [
 { condition: 'Done', next: 'review' },
 { condition: 'Blocked', next: 'plan' },
 ];
@@ -248,7 +248,7 @@ describe('instruction-builder', () => {
 });

 it('should omit interactive-only rules when interactive is false', () => {
-const filteredRules: WorkflowRule[] = [
+const filteredRules: PieceRule[] = [
 { condition: 'Clear', next: 'implement' },
 { condition: 'User input required', next: 'implement', interactiveOnly: true },
 { condition: 'Blocked', next: 'plan' },
@@ -298,7 +298,7 @@ describe('instruction-builder', () => {
 });
 });

-describe('auto-injected Workflow Context section', () => {
+describe('auto-injected Piece Context section', () => {
 it('should include iteration, step iteration, and step name', () => {
 const step = createMinimalStep('Do work');
 step.name = 'implement';
@@ -311,7 +311,7 @@ describe('instruction-builder', () => {

 const result = buildInstruction(step, context);

-expect(result).toContain('## Workflow Context');
+expect(result).toContain('## Piece Context');
 expect(result).toContain('- Iteration: 3/20');
 expect(result).toContain('- Movement Iteration: 2');
 expect(result).toContain('- Movement: implement');
@@ -328,7 +328,7 @@ describe('instruction-builder', () => {

 const result = buildInstruction(step, context);

-expect(result).toContain('## Workflow Context');
+expect(result).toContain('## Piece Context');
 expect(result).toContain('Report Directory');
 expect(result).toContain('Report File');
 expect(result).toContain('Phase 1');
@@ -380,12 +380,12 @@ describe('instruction-builder', () => {
 expect(result).toContain('- Movement Iteration: 3(このムーブメントの実行回数)');
 });

-it('should include workflow structure when workflowSteps is provided', () => {
+it('should include piece structure when pieceSteps is provided', () => {
 const step = createMinimalStep('Do work');
 step.name = 'implement';
 const context = createMinimalContext({
 language: 'en',
-workflowMovements: [
+pieceMovements: [
 { name: 'plan' },
 { name: 'implement' },
 { name: 'review' },
@@ -395,7 +395,7 @@ describe('instruction-builder', () => {

 const result = buildInstruction(step, context);

-expect(result).toContain('This workflow consists of 3 movements:');
+expect(result).toContain('This piece consists of 3 movements:');
 expect(result).toContain('- Movement 1: plan');
 expect(result).toContain('- Movement 2: implement');
 expect(result).toContain('← current');
@@ -407,7 +407,7 @@ describe('instruction-builder', () => {
 step.name = 'plan';
 const context = createMinimalContext({
 language: 'en',
-workflowMovements: [
+pieceMovements: [
 { name: 'plan' },
 { name: 'implement' },
 ],
@@ -425,7 +425,7 @@ describe('instruction-builder', () => {
 step.name = 'plan';
 const context = createMinimalContext({
 language: 'ja',
-workflowMovements: [
+pieceMovements: [
 { name: 'plan', description: 'タスクを分析し実装計画を作成する' },
 { name: 'implement' },
 ],
@@ -437,34 +437,34 @@ describe('instruction-builder', () => {
 expect(result).toContain('- Movement 1: plan(タスクを分析し実装計画を作成する) ← 現在');
 });

-it('should skip workflow structure when workflowSteps is not provided', () => {
+it('should skip piece structure when pieceSteps is not provided', () => {
 const step = createMinimalStep('Do work');
 const context = createMinimalContext({ language: 'en' });

 const result = buildInstruction(step, context);

-expect(result).not.toContain('This workflow consists of');
+expect(result).not.toContain('This piece consists of');
 });

-it('should skip workflow structure when workflowSteps is empty', () => {
+it('should skip piece structure when pieceSteps is empty', () => {
 const step = createMinimalStep('Do work');
 const context = createMinimalContext({
 language: 'en',
-workflowMovements: [],
+pieceMovements: [],
 currentMovementIndex: -1,
 });

 const result = buildInstruction(step, context);

-expect(result).not.toContain('This workflow consists of');
+expect(result).not.toContain('This piece consists of');
 });

-it('should render workflow structure in Japanese', () => {
+it('should render piece structure in Japanese', () => {
 const step = createMinimalStep('Do work');
 step.name = 'plan';
 const context = createMinimalContext({
 language: 'ja',
-workflowMovements: [
+pieceMovements: [
 { name: 'plan' },
 { name: 'implement' },
 ],
@@ -473,7 +473,7 @@ describe('instruction-builder', () => {

 const result = buildInstruction(step, context);

-expect(result).toContain('このワークフローは2ムーブメントで構成されています:');
+expect(result).toContain('このピースは2ムーブメントで構成されています:');
 expect(result).toContain('← 現在');
 });

@@ -482,7 +482,7 @@ describe('instruction-builder', () => {
 step.name = 'sub-step';
 const context = createMinimalContext({
 language: 'en',
-workflowMovements: [
+pieceMovements: [
 { name: 'plan' },
 { name: 'implement' },
 ],
@@ -491,7 +491,7 @@ describe('instruction-builder', () => {

 const result = buildInstruction(step, context);

-expect(result).toContain('This workflow consists of 2 movements:');
+expect(result).toContain('This piece consists of 2 movements:');
 expect(result).not.toContain('← current');
 });
 });
@@ -6,7 +6,7 @@ import { describe, it, expect, vi, beforeEach } from 'vitest';

 vi.mock('../infra/config/global/globalConfig.js', () => ({
 loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
-getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));

 vi.mock('../infra/providers/index.js', () => ({
@@ -5,7 +5,7 @@
 * loop detection, scenario queue exhaustion, and movement execution exceptions.
 *
 * Mocked: UI, session, phase-runner, notifications, config, callAiJudge
- * Not mocked: WorkflowEngine, runAgent, detectMatchedRule, rule-evaluator
+ * Not mocked: PieceEngine, runAgent, detectMatchedRule, rule-evaluator
 */

 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -13,7 +13,7 @@ import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { setMockScenario, resetScenario } from '../infra/mock/index.js';
-import type { WorkflowConfig, WorkflowMovement, WorkflowRule } from '../core/models/index.js';
+import type { PieceConfig, PieceMovement, PieceRule } from '../core/models/index.js';
 import { callAiJudge, detectRuleIndex } from '../infra/claude/index.js';

 // --- Mocks ---
@@ -26,7 +26,7 @@ vi.mock('../infra/claude/client.js', async (importOriginal) => {
 };
 });

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
 needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
 runReportPhase: vi.fn().mockResolvedValue(undefined),
 runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -42,7 +42,7 @@ vi.mock('../infra/config/global/globalConfig.js', () => ({
 loadGlobalConfig: vi.fn().mockReturnValue({}),
 getLanguage: vi.fn().mockReturnValue('en'),
 getDisabledBuiltins: vi.fn().mockReturnValue([]),
-getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));

 vi.mock('../infra/config/project/projectConfig.js', () => ({
@@ -51,15 +51,15 @@ vi.mock('../infra/config/project/projectConfig.js', () => ({

 // --- Imports (after mocks) ---

-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';

 // --- Test helpers ---

-function makeRule(condition: string, next: string): WorkflowRule {
+function makeRule(condition: string, next: string): PieceRule {
 return { condition, next };
 }

-function makeMovement(name: string, agentPath: string, rules: WorkflowRule[]): WorkflowMovement {
+function makeMovement(name: string, agentPath: string, rules: PieceRule[]): PieceMovement {
 return {
 name,
 agent: `./agents/${name}.md`,
@@ -98,10 +98,10 @@ function buildEngineOptions(projectCwd: string) {
 };
 }

-function buildWorkflow(agentPaths: Record<string, string>, maxIterations: number): WorkflowConfig {
+function buildPiece(agentPaths: Record<string, string>, maxIterations: number): PieceConfig {
 return {
 name: 'it-error',
-description: 'IT error recovery workflow',
+description: 'IT error recovery piece',
 maxIterations,
 initialMovement: 'plan',
 movements: [
@@ -142,15 +142,15 @@ describe('Error Recovery IT: agent blocked response', () => {
 { agent: 'plan', status: 'blocked', content: 'Error: Agent is blocked.' },
 ]);

-const config = buildWorkflow(agentPaths, 10);
+const config = buildPiece(agentPaths, 10);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Test task', {
|
const engine = new PieceEngine(config, testDir, 'Test task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
|
|
||||||
const state = await engine.run();
|
const state = await engine.run();
|
||||||
|
|
||||||
// Blocked agent should result in workflow abort
|
// Blocked agent should result in piece abort
|
||||||
expect(state.status).toBe('aborted');
|
expect(state.status).toBe('aborted');
|
||||||
});
|
});
|
||||||
|
|
||||||
@ -159,8 +159,8 @@ describe('Error Recovery IT: agent blocked response', () => {
|
|||||||
{ agent: 'plan', status: 'done', content: '' },
|
{ agent: 'plan', status: 'done', content: '' },
|
||||||
]);
|
]);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 10);
|
const config = buildPiece(agentPaths, 10);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Test task', {
|
const engine = new PieceEngine(config, testDir, 'Test task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
@ -189,15 +189,15 @@ describe('Error Recovery IT: max iterations reached', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should abort when max iterations reached (tight limit)', async () => {
|
it('should abort when max iterations reached (tight limit)', async () => {
|
||||||
// Only 2 iterations allowed, but workflow needs 3 movements
|
// Only 2 iterations allowed, but piece needs 3 movements
|
||||||
setMockScenario([
|
setMockScenario([
|
||||||
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
||||||
{ agent: 'implement', status: 'done', content: '[IMPLEMENT:1]\n\nDone.' },
|
{ agent: 'implement', status: 'done', content: '[IMPLEMENT:1]\n\nDone.' },
|
||||||
{ agent: 'review', status: 'done', content: '[REVIEW:1]\n\nPassed.' },
|
{ agent: 'review', status: 'done', content: '[REVIEW:1]\n\nPassed.' },
|
||||||
]);
|
]);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 2);
|
const config = buildPiece(agentPaths, 2);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Task', {
|
const engine = new PieceEngine(config, testDir, 'Task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
@ -216,8 +216,8 @@ describe('Error Recovery IT: max iterations reached', () => {
|
|||||||
}));
|
}));
|
||||||
setMockScenario(loopScenario);
|
setMockScenario(loopScenario);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 4);
|
const config = buildPiece(agentPaths, 4);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Looping task', {
|
const engine = new PieceEngine(config, testDir, 'Looping task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
@ -245,14 +245,14 @@ describe('Error Recovery IT: scenario queue exhaustion', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should handle scenario queue exhaustion mid-workflow', async () => {
|
it('should handle scenario queue exhaustion mid-piece', async () => {
|
||||||
// Only 1 entry, but workflow needs 3 movements
|
// Only 1 entry, but piece needs 3 movements
|
||||||
setMockScenario([
|
setMockScenario([
|
||||||
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
||||||
]);
|
]);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 10);
|
const config = buildPiece(agentPaths, 10);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Task', {
|
const engine = new PieceEngine(config, testDir, 'Task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
@ -281,21 +281,21 @@ describe('Error Recovery IT: movement events on error paths', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should emit workflow:abort event with reason on max iterations', async () => {
|
it('should emit piece:abort event with reason on max iterations', async () => {
|
||||||
const loopScenario = Array.from({ length: 6 }, (_, i) => ({
|
const loopScenario = Array.from({ length: 6 }, (_, i) => ({
|
||||||
status: 'done' as const,
|
status: 'done' as const,
|
||||||
content: i % 2 === 0 ? '[PLAN:1]\n\nClear.' : '[IMPLEMENT:2]\n\nCannot proceed.',
|
content: i % 2 === 0 ? '[PLAN:1]\n\nClear.' : '[IMPLEMENT:2]\n\nCannot proceed.',
|
||||||
}));
|
}));
|
||||||
setMockScenario(loopScenario);
|
setMockScenario(loopScenario);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 3);
|
const config = buildPiece(agentPaths, 3);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Task', {
|
const engine = new PieceEngine(config, testDir, 'Task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
|
|
||||||
let abortReason: string | undefined;
|
let abortReason: string | undefined;
|
||||||
engine.on('workflow:abort', (_state, reason) => {
|
engine.on('piece:abort', (_state, reason) => {
|
||||||
abortReason = reason;
|
abortReason = reason;
|
||||||
});
|
});
|
||||||
|
|
||||||
@ -309,8 +309,8 @@ describe('Error Recovery IT: movement events on error paths', () => {
|
|||||||
{ agent: 'plan', status: 'done', content: '[PLAN:2]\n\nRequirements unclear.' },
|
{ agent: 'plan', status: 'done', content: '[PLAN:2]\n\nRequirements unclear.' },
|
||||||
]);
|
]);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 10);
|
const config = buildPiece(agentPaths, 10);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Task', {
|
const engine = new PieceEngine(config, testDir, 'Task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
@ -348,7 +348,7 @@ describe('Error Recovery IT: programmatic abort', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should support engine.abort() to cancel running workflow', async () => {
|
it('should support engine.abort() to cancel running piece', async () => {
|
||||||
// Provide enough scenarios for 3 steps
|
// Provide enough scenarios for 3 steps
|
||||||
setMockScenario([
|
setMockScenario([
|
||||||
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
{ agent: 'plan', status: 'done', content: '[PLAN:1]\n\nClear.' },
|
||||||
@ -356,8 +356,8 @@ describe('Error Recovery IT: programmatic abort', () => {
|
|||||||
{ agent: 'review', status: 'done', content: '[REVIEW:1]\n\nPassed.' },
|
{ agent: 'review', status: 'done', content: '[REVIEW:1]\n\nPassed.' },
|
||||||
]);
|
]);
|
||||||
|
|
||||||
const config = buildWorkflow(agentPaths, 10);
|
const config = buildPiece(agentPaths, 10);
|
||||||
const engine = new WorkflowEngine(config, testDir, 'Task', {
|
const engine = new PieceEngine(config, testDir, 'Task', {
|
||||||
...buildEngineOptions(testDir),
|
...buildEngineOptions(testDir),
|
||||||
provider: 'mock',
|
provider: 'mock',
|
||||||
});
|
});
|
||||||
|
|||||||
@@ -2,43 +2,43 @@
  * Instruction builder integration tests.
  *
  * Tests template variable expansion and auto-injection in buildInstruction().
- * Uses real workflow movement configs (not mocked) against the buildInstruction function.
+ * Uses real piece movement configs (not mocked) against the buildInstruction function.
  *
  * Not mocked: buildInstruction, buildReportInstruction, buildStatusJudgmentInstruction
  */
 
 import { describe, it, expect, vi } from 'vitest';
-import type { WorkflowMovement, WorkflowRule, AgentResponse } from '../core/models/index.js';
+import type { PieceMovement, PieceRule, AgentResponse } from '../core/models/index.js';
 
 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
-import { InstructionBuilder } from '../core/workflow/index.js';
-import { ReportInstructionBuilder, type ReportInstructionContext } from '../core/workflow/index.js';
-import { StatusJudgmentBuilder, type StatusJudgmentContext } from '../core/workflow/index.js';
-import type { InstructionContext } from '../core/workflow/index.js';
+import { InstructionBuilder } from '../core/piece/index.js';
+import { ReportInstructionBuilder, type ReportInstructionContext } from '../core/piece/index.js';
+import { StatusJudgmentBuilder, type StatusJudgmentContext } from '../core/piece/index.js';
+import type { InstructionContext } from '../core/piece/index.js';
 
 // Function wrappers for test readability
-function buildInstruction(movement: WorkflowMovement, ctx: InstructionContext): string {
+function buildInstruction(movement: PieceMovement, ctx: InstructionContext): string {
   return new InstructionBuilder(movement, ctx).build();
 }
-function buildReportInstruction(movement: WorkflowMovement, ctx: ReportInstructionContext): string {
+function buildReportInstruction(movement: PieceMovement, ctx: ReportInstructionContext): string {
   return new ReportInstructionBuilder(movement, ctx).build();
 }
-function buildStatusJudgmentInstruction(movement: WorkflowMovement, ctx: StatusJudgmentContext): string {
+function buildStatusJudgmentInstruction(movement: PieceMovement, ctx: StatusJudgmentContext): string {
   return new StatusJudgmentBuilder(movement, ctx).build();
 }
 
 // --- Test helpers ---
 
-function makeRule(condition: string, next: string, extra?: Partial<WorkflowRule>): WorkflowRule {
+function makeRule(condition: string, next: string, extra?: Partial<PieceRule>): PieceRule {
   return { condition, next, ...extra };
 }
 
-function makeMovement(overrides: Partial<WorkflowMovement> = {}): WorkflowMovement {
+function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
   return {
     name: 'test-step',
     agent: 'test-agent',
@@ -187,7 +187,7 @@ describe('Instruction Builder IT: iteration variables', () => {
    expect(result).toContain('Iter: 5/30, movement iter: 2');
  });
 
-  it('should include iteration in Workflow Context section', () => {
+  it('should include iteration in Piece Context section', () => {
    const step = makeMovement();
    const ctx = makeContext({ iteration: 7, maxIterations: 20, movementIteration: 3 });
 
@@ -1,12 +1,12 @@
 /**
- * Workflow execution integration tests.
+ * Piece execution integration tests.
  *
- * Tests WorkflowEngine with real runAgent + MockProvider + ScenarioQueue.
+ * Tests PieceEngine with real runAgent + MockProvider + ScenarioQueue.
  * No vi.mock on runAgent or detectMatchedRule — rules are matched via
  * [MOVEMENT_NAME:N] tags in scenario content (tag-based detection).
  *
  * Mocked: UI, session, phase-runner (report/judgment phases), notifications, config
- * Not mocked: WorkflowEngine, runAgent, detectMatchedRule, rule-evaluator
+ * Not mocked: PieceEngine, runAgent, detectMatchedRule, rule-evaluator
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -14,7 +14,7 @@ import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { setMockScenario, resetScenario } from '../infra/mock/index.js';
-import type { WorkflowConfig, WorkflowMovement, WorkflowRule } from '../core/models/index.js';
+import type { PieceConfig, PieceMovement, PieceRule } from '../core/models/index.js';
 import { callAiJudge, detectRuleIndex } from '../infra/claude/index.js';
 
 // --- Mocks (minimal — only infrastructure, not core logic) ---
@@ -30,7 +30,7 @@ vi.mock('../infra/claude/client.js', async (importOriginal) => {
   };
 });
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -45,7 +45,7 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
 vi.mock('../infra/config/project/projectConfig.js', () => ({
@@ -54,15 +54,15 @@ vi.mock('../infra/config/project/projectConfig.js', () => ({
 
 // --- Imports (after mocks) ---
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 
 // --- Test helpers ---
 
-function makeRule(condition: string, next: string): WorkflowRule {
+function makeRule(condition: string, next: string): PieceRule {
   return { condition, next };
 }
 
-function makeMovement(name: string, agentPath: string, rules: WorkflowRule[]): WorkflowMovement {
+function makeMovement(name: string, agentPath: string, rules: PieceRule[]): PieceMovement {
   return {
     name,
     agent: `./agents/${name}.md`,
@@ -100,10 +100,10 @@ function buildEngineOptions(projectCwd: string) {
   };
 }
 
-function buildSimpleWorkflow(agentPaths: Record<string, string>): WorkflowConfig {
+function buildSimplePiece(agentPaths: Record<string, string>): PieceConfig {
   return {
     name: 'it-simple',
-    description: 'IT simple workflow',
+    description: 'IT simple piece',
     maxIterations: 15,
     initialMovement: 'plan',
     movements: [
@@ -123,10 +123,10 @@ function buildSimpleWorkflow(agentPaths: Record<string, string>): WorkflowConfig
   };
 }
 
-function buildLoopWorkflow(agentPaths: Record<string, string>): WorkflowConfig {
+function buildLoopPiece(agentPaths: Record<string, string>): PieceConfig {
   return {
     name: 'it-loop',
-    description: 'IT workflow with fix loop',
+    description: 'IT piece with fix loop',
     maxIterations: 20,
     initialMovement: 'plan',
     movements: [
@@ -154,7 +154,7 @@ function buildLoopWorkflow(agentPaths: Record<string, string>): WorkflowConfig {
   };
 }
 
-describe('Workflow Engine IT: Happy Path', () => {
+describe('Piece Engine IT: Happy Path', () => {
  let testDir: string;
  let agentPaths: Record<string, string>;
 
@@ -177,8 +177,8 @@ describe('Workflow Engine IT: Happy Path', () => {
       { agent: 'review', status: 'done', content: '[REVIEW:1]\n\nAll checks passed.' },
     ]);
 
-    const config = buildSimpleWorkflow(agentPaths);
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const config = buildSimplePiece(agentPaths);
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -194,8 +194,8 @@ describe('Workflow Engine IT: Happy Path', () => {
       { agent: 'plan', status: 'done', content: '[PLAN:2]\n\nRequirements unclear.' },
     ]);
 
-    const config = buildSimpleWorkflow(agentPaths);
-    const engine = new WorkflowEngine(config, testDir, 'Vague task', {
+    const config = buildSimplePiece(agentPaths);
+    const engine = new PieceEngine(config, testDir, 'Vague task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -207,7 +207,7 @@ describe('Workflow Engine IT: Happy Path', () => {
  });
 });
 
-describe('Workflow Engine IT: Fix Loop', () => {
+describe('Piece Engine IT: Fix Loop', () => {
  let testDir: string;
  let agentPaths: Record<string, string>;
 
@@ -237,8 +237,8 @@ describe('Workflow Engine IT: Fix Loop', () => {
       { agent: 'supervise', status: 'done', content: '[SUPERVISE:1]\n\nAll checks passed.' },
     ]);
 
-    const config = buildLoopWorkflow(agentPaths);
-    const engine = new WorkflowEngine(config, testDir, 'Task needing fix', {
+    const config = buildLoopPiece(agentPaths);
+    const engine = new PieceEngine(config, testDir, 'Task needing fix', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -257,8 +257,8 @@ describe('Workflow Engine IT: Fix Loop', () => {
       { agent: 'fix', status: 'done', content: '[FIX:2]\n\nCannot fix.' },
     ]);
 
-    const config = buildLoopWorkflow(agentPaths);
-    const engine = new WorkflowEngine(config, testDir, 'Unfixable task', {
+    const config = buildLoopPiece(agentPaths);
+    const engine = new PieceEngine(config, testDir, 'Unfixable task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -269,7 +269,7 @@ describe('Workflow Engine IT: Fix Loop', () => {
  });
 });
 
-describe('Workflow Engine IT: Max Iterations', () => {
+describe('Piece Engine IT: Max Iterations', () => {
  let testDir: string;
  let agentPaths: Record<string, string>;
 
@@ -293,10 +293,10 @@ describe('Workflow Engine IT: Max Iterations', () => {
     }));
    setMockScenario(infiniteScenario);
 
-    const config = buildSimpleWorkflow(agentPaths);
+    const config = buildSimplePiece(agentPaths);
    config.maxIterations = 5;
 
-    const engine = new WorkflowEngine(config, testDir, 'Looping task', {
+    const engine = new PieceEngine(config, testDir, 'Looping task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -308,7 +308,7 @@ describe('Workflow Engine IT: Max Iterations', () => {
  });
 });
 
-describe('Workflow Engine IT: Movement Output Tracking', () => {
+describe('Piece Engine IT: Movement Output Tracking', () => {
  let testDir: string;
  let agentPaths: Record<string, string>;
 
@@ -331,8 +331,8 @@ describe('Workflow Engine IT: Movement Output Tracking', () => {
       { agent: 'review', status: 'done', content: '[REVIEW:1]\n\nReview output.' },
     ]);
 
-    const config = buildSimpleWorkflow(agentPaths);
-    const engine = new WorkflowEngine(config, testDir, 'Track outputs', {
+    const config = buildSimplePiece(agentPaths);
+    const engine = new PieceEngine(config, testDir, 'Track outputs', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -1,11 +1,11 @@
 /**
- * Workflow loader integration tests.
+ * Piece loader integration tests.
  *
- * Tests the 3-tier workflow resolution (project-local → user → builtin)
+ * Tests the 3-tier piece resolution (project-local → user → builtin)
  * and YAML parsing including special rule syntax (ai(), all(), any()).
  *
  * Mocked: globalConfig (for language/builtins)
- * Not mocked: loadWorkflow, parseWorkflow, rule parsing
+ * Not mocked: loadPiece, parsePiece, rule parsing
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -19,12 +19,12 @@ vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
   getDisabledBuiltins: vi.fn().mockReturnValue([]),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
 // --- Imports (after mocks) ---
 
-import { loadWorkflow } from '../infra/config/index.js';
+import { loadPiece } from '../infra/config/index.js';
 
 // --- Test helpers ---
 
@@ -34,7 +34,7 @@ function createTestDir(): string {
   return dir;
 }
 
-describe('Workflow Loader IT: builtin workflow loading', () => {
+describe('Piece Loader IT: builtin piece loading', () => {
  let testDir: string;
 
  beforeEach(() => {
@@ -48,8 +48,8 @@ describe('Workflow Loader IT: builtin workflow loading', () => {
  const builtinNames = ['default', 'minimal', 'expert', 'expert-cqrs', 'research', 'magi', 'review-only', 'review-fix-minimal'];
 
  for (const name of builtinNames) {
-    it(`should load builtin workflow: ${name}`, () => {
-      const config = loadWorkflow(name, testDir);
+    it(`should load builtin piece: ${name}`, () => {
+      const config = loadPiece(name, testDir);
 
      expect(config).not.toBeNull();
      expect(config!.name).toBe(name);
@@ -59,13 +59,13 @@ describe('Workflow Loader IT: builtin workflow loading', () => {
    });
  }
 
-  it('should return null for non-existent workflow', () => {
-    const config = loadWorkflow('non-existent-workflow-xyz', testDir);
+  it('should return null for non-existent piece', () => {
+    const config = loadPiece('non-existent-piece-xyz', testDir);
    expect(config).toBeNull();
  });
 });
 
-describe('Workflow Loader IT: project-local workflow override', () => {
+describe('Piece Loader IT: project-local piece override', () => {
  let testDir: string;
 
  beforeEach(() => {
@@ -76,17 +76,17 @@ describe('Workflow Loader IT: project-local workflow override', () => {
    rmSync(testDir, { recursive: true, force: true });
  });
 
-  it('should load project-local workflow from .takt/workflows/', () => {
-    const workflowsDir = join(testDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
+  it('should load project-local piece from .takt/pieces/', () => {
+    const piecesDir = join(testDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
 
    const agentsDir = join(testDir, 'agents');
    mkdirSync(agentsDir, { recursive: true });
    writeFileSync(join(agentsDir, 'custom.md'), 'Custom agent');
 
-    writeFileSync(join(workflowsDir, 'custom-wf.yaml'), `
+    writeFileSync(join(piecesDir, 'custom-wf.yaml'), `
 name: custom-wf
-description: Custom project workflow
+description: Custom project piece
 max_iterations: 5
 initial_movement: start
 
@@ -99,7 +99,7 @@ movements:
    instruction: "Do the work"
 `);
 
-    const config = loadWorkflow('custom-wf', testDir);
+    const config = loadPiece('custom-wf', testDir);
 
    expect(config).not.toBeNull();
    expect(config!.name).toBe('custom-wf');
@@ -108,7 +108,7 @@ movements:
  });
 });
 
-describe('Workflow Loader IT: agent path resolution', () => {
+describe('Piece Loader IT: agent path resolution', () => {
  let testDir: string;
 
  beforeEach(() => {
@@ -119,8 +119,8 @@ describe('Workflow Loader IT: agent path resolution', () => {
    rmSync(testDir, { recursive: true, force: true });
  });
 
-  it('should resolve relative agent paths from workflow YAML location', () => {
-    const config = loadWorkflow('minimal', testDir);
+  it('should resolve relative agent paths from piece YAML location', () => {
+    const config = loadPiece('minimal', testDir);
    expect(config).not.toBeNull();
 
    for (const movement of config!.movements) {
@@ -142,7 +142,7 @@ describe('Workflow Loader IT: agent path resolution', () => {
  });
 });
 
-describe('Workflow Loader IT: rule syntax parsing', () => {
+describe('Piece Loader IT: rule syntax parsing', () => {
  let testDir: string;
 
  beforeEach(() => {
@@ -153,8 +153,8 @@ describe('Workflow Loader IT: rule syntax parsing', () => {
    rmSync(testDir, { recursive: true, force: true });
  });
 
-  it('should parse all() aggregate conditions from default workflow', () => {
-    const config = loadWorkflow('default', testDir);
+  it('should parse all() aggregate conditions from default piece', () => {
+    const config = loadPiece('default', testDir);
    expect(config).not.toBeNull();
 
    // Find the parallel reviewers movement
@@ -171,8 +171,8 @@ describe('Workflow Loader IT: rule syntax parsing', () => {
    expect(allRule!.aggregateConditionText).toBe('approved');
  });
 
-  it('should parse any() aggregate conditions from default workflow', () => {
-    const config = loadWorkflow('default', testDir);
+  it('should parse any() aggregate conditions from default piece', () => {
+    const config = loadPiece('default', testDir);
    expect(config).not.toBeNull();
 
    const reviewersStep = config!.movements.find(
@@ -187,7 +187,7 @@ describe('Workflow Loader IT: rule syntax parsing', () => {
@ -187,7 +187,7 @@ describe('Workflow Loader IT: rule syntax parsing', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should parse standard rules with next movement', () => {
|
it('should parse standard rules with next movement', () => {
|
||||||
const config = loadWorkflow('minimal', testDir);
|
const config = loadPiece('minimal', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
const implementStep = config!.movements.find((s) => s.name === 'implement');
|
const implementStep = config!.movements.find((s) => s.name === 'implement');
|
||||||
@ -203,7 +203,7 @@ describe('Workflow Loader IT: rule syntax parsing', () => {
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('Workflow Loader IT: workflow config validation', () => {
|
describe('Piece Loader IT: piece config validation', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
@ -215,14 +215,14 @@ describe('Workflow Loader IT: workflow config validation', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should set max_iterations from YAML', () => {
|
it('should set max_iterations from YAML', () => {
|
||||||
const config = loadWorkflow('minimal', testDir);
|
const config = loadPiece('minimal', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
expect(typeof config!.maxIterations).toBe('number');
|
expect(typeof config!.maxIterations).toBe('number');
|
||||||
expect(config!.maxIterations).toBeGreaterThan(0);
|
expect(config!.maxIterations).toBeGreaterThan(0);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should set initial_movement from YAML', () => {
|
it('should set initial_movement from YAML', () => {
|
||||||
const config = loadWorkflow('minimal', testDir);
|
const config = loadPiece('minimal', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
expect(typeof config!.initialMovement).toBe('string');
|
expect(typeof config!.initialMovement).toBe('string');
|
||||||
|
|
||||||
@ -232,7 +232,7 @@ describe('Workflow Loader IT: workflow config validation', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should preserve edit property on movements (review-only has no edit: true)', () => {
|
it('should preserve edit property on movements (review-only has no edit: true)', () => {
|
||||||
const config = loadWorkflow('review-only', testDir);
|
const config = loadPiece('review-only', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
// review-only: no movement should have edit: true
|
// review-only: no movement should have edit: true
|
||||||
@ -246,7 +246,7 @@ describe('Workflow Loader IT: workflow config validation', () => {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// expert: implement movement should have edit: true
|
// expert: implement movement should have edit: true
|
||||||
const expertConfig = loadWorkflow('expert', testDir);
|
const expertConfig = loadPiece('expert', testDir);
|
||||||
expect(expertConfig).not.toBeNull();
|
expect(expertConfig).not.toBeNull();
|
||||||
const implementStep = expertConfig!.movements.find((s) => s.name === 'implement');
|
const implementStep = expertConfig!.movements.find((s) => s.name === 'implement');
|
||||||
expect(implementStep).toBeDefined();
|
expect(implementStep).toBeDefined();
|
||||||
@ -254,7 +254,7 @@ describe('Workflow Loader IT: workflow config validation', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should set passPreviousResponse from YAML', () => {
|
it('should set passPreviousResponse from YAML', () => {
|
||||||
const config = loadWorkflow('minimal', testDir);
|
const config = loadPiece('minimal', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
// At least some movements should have passPreviousResponse set
|
// At least some movements should have passPreviousResponse set
|
||||||
@ -263,7 +263,7 @@ describe('Workflow Loader IT: workflow config validation', () => {
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('Workflow Loader IT: parallel movement loading', () => {
|
describe('Piece Loader IT: parallel movement loading', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
@ -274,8 +274,8 @@ describe('Workflow Loader IT: parallel movement loading', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should load parallel sub-movements from default workflow', () => {
|
it('should load parallel sub-movements from default piece', () => {
|
||||||
const config = loadWorkflow('default', testDir);
|
const config = loadPiece('default', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
const parallelStep = config!.movements.find(
|
const parallelStep = config!.movements.find(
|
||||||
@ -292,8 +292,8 @@ describe('Workflow Loader IT: parallel movement loading', () => {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should load 4 parallel reviewers from expert workflow', () => {
|
it('should load 4 parallel reviewers from expert piece', () => {
|
||||||
const config = loadWorkflow('expert', testDir);
|
const config = loadPiece('expert', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
const parallelStep = config!.movements.find(
|
const parallelStep = config!.movements.find(
|
||||||
@ -309,7 +309,7 @@ describe('Workflow Loader IT: parallel movement loading', () => {
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('Workflow Loader IT: report config loading', () => {
|
describe('Piece Loader IT: report config loading', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
@ -321,17 +321,17 @@ describe('Workflow Loader IT: report config loading', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('should load single report config', () => {
|
it('should load single report config', () => {
|
||||||
const config = loadWorkflow('default', testDir);
|
const config = loadPiece('default', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
// default workflow: plan movement has a report config
|
// default piece: plan movement has a report config
|
||||||
const planStep = config!.movements.find((s) => s.name === 'plan');
|
const planStep = config!.movements.find((s) => s.name === 'plan');
|
||||||
expect(planStep).toBeDefined();
|
expect(planStep).toBeDefined();
|
||||||
expect(planStep!.report).toBeDefined();
|
expect(planStep!.report).toBeDefined();
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should load multi-report config from expert workflow', () => {
|
it('should load multi-report config from expert piece', () => {
|
||||||
const config = loadWorkflow('expert', testDir);
|
const config = loadPiece('expert', testDir);
|
||||||
expect(config).not.toBeNull();
|
expect(config).not.toBeNull();
|
||||||
|
|
||||||
// implement movement has multi-report: [Scope, Decisions]
|
// implement movement has multi-report: [Scope, Decisions]
|
||||||
@ -343,7 +343,7 @@ describe('Workflow Loader IT: report config loading', () => {
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('Workflow Loader IT: invalid YAML handling', () => {
|
describe('Piece Loader IT: invalid YAML handling', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
@ -354,28 +354,28 @@ describe('Workflow Loader IT: invalid YAML handling', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should throw for workflow file with invalid YAML', () => {
|
it('should throw for piece file with invalid YAML', () => {
|
||||||
const workflowsDir = join(testDir, '.takt', 'workflows');
|
const piecesDir = join(testDir, '.takt', 'pieces');
|
||||||
mkdirSync(workflowsDir, { recursive: true });
|
mkdirSync(piecesDir, { recursive: true });
|
||||||
|
|
||||||
writeFileSync(join(workflowsDir, 'broken.yaml'), `
|
writeFileSync(join(piecesDir, 'broken.yaml'), `
|
||||||
name: broken
|
name: broken
|
||||||
this is not: valid yaml: [[[[
|
this is not: valid yaml: [[[[
|
||||||
- bad: {
|
- bad: {
|
||||||
`);
|
`);
|
||||||
|
|
||||||
expect(() => loadWorkflow('broken', testDir)).toThrow();
|
expect(() => loadPiece('broken', testDir)).toThrow();
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should throw for workflow missing required fields', () => {
|
it('should throw for piece missing required fields', () => {
|
||||||
const workflowsDir = join(testDir, '.takt', 'workflows');
|
const piecesDir = join(testDir, '.takt', 'pieces');
|
||||||
mkdirSync(workflowsDir, { recursive: true });
|
mkdirSync(piecesDir, { recursive: true });
|
||||||
|
|
||||||
writeFileSync(join(workflowsDir, 'incomplete.yaml'), `
|
writeFileSync(join(piecesDir, 'incomplete.yaml'), `
|
||||||
name: incomplete
|
name: incomplete
|
||||||
description: Missing movements
|
description: Missing movements
|
||||||
`);
|
`);
|
||||||
|
|
||||||
expect(() => loadWorkflow('incomplete', testDir)).toThrow();
|
expect(() => loadPiece('incomplete', testDir)).toThrow();
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
@@ -1,11 +1,11 @@
 /**
- * Workflow patterns integration tests.
+ * Piece patterns integration tests.
  *
- * Tests that all builtin workflow definitions can be loaded and execute
- * the expected step transitions using WorkflowEngine + MockProvider + ScenarioQueue.
+ * Tests that all builtin piece definitions can be loaded and execute
+ * the expected step transitions using PieceEngine + MockProvider + ScenarioQueue.
  *
  * Mocked: UI, session, phase-runner, notifications, config, callAiJudge
- * Not mocked: WorkflowEngine, runAgent, detectMatchedRule, rule-evaluator
+ * Not mocked: PieceEngine, runAgent, detectMatchedRule, rule-evaluator
  */

 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -33,7 +33,7 @@ vi.mock('../infra/claude/client.js', async (importOriginal) => {
   };
 });

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -49,7 +49,7 @@ vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
   getDisabledBuiltins: vi.fn().mockReturnValue([]),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));

 vi.mock('../infra/config/project/projectConfig.js', () => ({
@@ -58,9 +58,9 @@ vi.mock('../infra/config/project/projectConfig.js', () => ({

 // --- Imports (after mocks) ---

-import { WorkflowEngine } from '../core/workflow/index.js';
-import { loadWorkflow } from '../infra/config/index.js';
-import type { WorkflowConfig } from '../core/models/index.js';
+import { PieceEngine } from '../core/piece/index.js';
+import { loadPiece } from '../infra/config/index.js';
+import type { PieceConfig } from '../core/models/index.js';

 // --- Test helpers ---

@@ -70,8 +70,8 @@ function createTestDir(): string {
   return dir;
 }

-function createEngine(config: WorkflowConfig, dir: string, task: string): WorkflowEngine {
-  return new WorkflowEngine(config, dir, task, {
+function createEngine(config: PieceConfig, dir: string, task: string): PieceEngine {
+  return new PieceEngine(config, dir, task, {
     projectCwd: dir,
     provider: 'mock',
     detectRuleIndex,
@@ -79,7 +79,7 @@ function createEngine(config: WorkflowConfig, dir: string, task: string): Workfl
   });
 }

-describe('Workflow Patterns IT: minimal workflow', () => {
+describe('Piece Patterns IT: minimal piece', () => {
   let testDir: string;

   beforeEach(() => {
@@ -93,7 +93,7 @@ describe('Workflow Patterns IT: minimal workflow', () => {
   });

   it('should complete: implement → reviewers (parallel: ai_review + supervise) → COMPLETE', async () => {
-    const config = loadWorkflow('minimal', testDir);
+    const config = loadPiece('minimal', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -110,7 +110,7 @@ describe('Workflow Patterns IT: minimal workflow', () => {
   });

   it('should ABORT when implement cannot proceed', async () => {
-    const config = loadWorkflow('minimal', testDir);
+    const config = loadPiece('minimal', testDir);

     setMockScenario([
       { agent: 'coder', status: 'done', content: 'Cannot proceed, insufficient info.' },
@@ -125,7 +125,7 @@ describe('Workflow Patterns IT: minimal workflow', () => {

 });

-describe('Workflow Patterns IT: default workflow (parallel reviewers)', () => {
+describe('Piece Patterns IT: default piece (parallel reviewers)', () => {
   let testDir: string;

   beforeEach(() => {
@@ -139,7 +139,7 @@ describe('Workflow Patterns IT: default workflow (parallel reviewers)', () => {
   });

   it('should complete with all("approved") in parallel review step', async () => {
-    const config = loadWorkflow('default', testDir);
+    const config = loadPiece('default', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -161,7 +161,7 @@ describe('Workflow Patterns IT: default workflow (parallel reviewers)', () => {
   });

   it('should route to fix when any("needs_fix") in parallel review step', async () => {
-    const config = loadWorkflow('default', testDir);
+    const config = loadPiece('default', testDir);

     setMockScenario([
       { agent: 'planner', status: 'done', content: 'Requirements are clear and implementable' },
@@ -189,7 +189,7 @@ describe('Workflow Patterns IT: default workflow (parallel reviewers)', () => {
   });
 });

-describe('Workflow Patterns IT: research workflow', () => {
+describe('Piece Patterns IT: research piece', () => {
   let testDir: string;

   beforeEach(() => {
@@ -203,7 +203,7 @@ describe('Workflow Patterns IT: research workflow', () => {
   });

   it('should complete: plan → dig → supervise → COMPLETE', async () => {
-    const config = loadWorkflow('research', testDir);
+    const config = loadPiece('research', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -220,7 +220,7 @@ describe('Workflow Patterns IT: research workflow', () => {
   });

   it('should loop: plan → dig → supervise (insufficient) → plan → dig → supervise → COMPLETE', async () => {
-    const config = loadWorkflow('research', testDir);
+    const config = loadPiece('research', testDir);

     setMockScenario([
       { agent: 'research/planner', status: 'done', content: '[PLAN:1]\n\nPlanning is complete.' },
@@ -240,7 +240,7 @@ describe('Workflow Patterns IT: research workflow', () => {
   });
 });

-describe('Workflow Patterns IT: magi workflow', () => {
+describe('Piece Patterns IT: magi piece', () => {
   let testDir: string;

   beforeEach(() => {
@@ -254,7 +254,7 @@ describe('Workflow Patterns IT: magi workflow', () => {
   });

   it('should complete: melchior → balthasar → casper → COMPLETE', async () => {
-    const config = loadWorkflow('magi', testDir);
+    const config = loadPiece('magi', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -271,7 +271,7 @@ describe('Workflow Patterns IT: magi workflow', () => {
   });
 });

-describe('Workflow Patterns IT: review-only workflow', () => {
+describe('Piece Patterns IT: review-only piece', () => {
   let testDir: string;

   beforeEach(() => {
@@ -285,7 +285,7 @@ describe('Workflow Patterns IT: review-only workflow', () => {
   });

   it('should complete: plan → reviewers (all approved) → supervise → COMPLETE', async () => {
-    const config = loadWorkflow('review-only', testDir);
+    const config = loadPiece('review-only', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -305,7 +305,7 @@ describe('Workflow Patterns IT: review-only workflow', () => {
   });

   it('should verify no movements have edit: true', () => {
-    const config = loadWorkflow('review-only', testDir);
+    const config = loadPiece('review-only', testDir);
     expect(config).not.toBeNull();

     for (const movement of config!.movements) {
@@ -319,7 +319,7 @@ describe('Workflow Patterns IT: review-only workflow', () => {
   });
 });

-describe('Workflow Patterns IT: expert workflow (4 parallel reviewers)', () => {
+describe('Piece Patterns IT: expert piece (4 parallel reviewers)', () => {
   let testDir: string;

   beforeEach(() => {
@@ -333,7 +333,7 @@ describe('Workflow Patterns IT: expert workflow (4 parallel reviewers)', () => {
   });

   it('should complete with all("approved") in 4-parallel review', async () => {
-    const config = loadWorkflow('expert', testDir);
+    const config = loadPiece('expert', testDir);
     expect(config).not.toBeNull();

     setMockScenario([
@@ -2,11 +2,11 @@
  * Pipeline execution mode integration tests.
  *
  * Tests various --pipeline mode option combinations including:
- * - --task, --issue, --skip-git, --auto-pr, --workflow (name/path), --provider, --model
+ * - --task, --issue, --skip-git, --auto-pr, --piece (name/path), --provider, --model
  * - Exit codes for different failure scenarios
  *
  * Mocked: git (child_process), GitHub API, UI, notifications, session, phase-runner, config
- * Not mocked: executePipeline, executeTask, WorkflowEngine, runAgent, rule evaluation
+ * Not mocked: executePipeline, executeTask, PieceEngine, runAgent, rule evaluation
  */

 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -109,7 +109,7 @@ vi.mock('../infra/config/paths.js', async (importOriginal) => {
     updateAgentSession: vi.fn(),
     loadWorktreeSessions: vi.fn().mockReturnValue({}),
     updateWorktreeSession: vi.fn(),
-    getCurrentWorkflow: vi.fn().mockReturnValue('default'),
+    getCurrentPiece: vi.fn().mockReturnValue('default'),
     getProjectConfigDir: vi.fn().mockImplementation((cwd: string) => join(cwd, '.takt')),
   };
 });
@@ -141,7 +141,7 @@ vi.mock('../shared/prompt/index.js', () => ({
   promptInput: vi.fn().mockResolvedValue(null),
 }));

-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -152,13 +152,13 @@ vi.mock('../core/workflow/phase-runner.js', () => ({
 import { executePipeline } from '../features/pipeline/index.js';
 import {
   EXIT_ISSUE_FETCH_FAILED,
-  EXIT_WORKFLOW_FAILED,
+  EXIT_PIECE_FAILED,
   EXIT_PR_CREATION_FAILED,
 } from '../shared/exitCodes.js';

 // --- Test helpers ---

-function createTestWorkflowDir(): { dir: string; workflowPath: string } {
+function createTestPieceDir(): { dir: string; piecePath: string } {
   const dir = mkdtempSync(join(tmpdir(), 'takt-it-pm-'));
   mkdirSync(join(dir, '.takt', 'reports', 'test-report-dir'), { recursive: true });

@@ -168,9 +168,9 @@ function createTestWorkflowDir(): { dir: string; workflowPath: string } {
   writeFileSync(join(agentsDir, 'coder.md'), 'You are a coder.');
   writeFileSync(join(agentsDir, 'reviewer.md'), 'You are a reviewer.');

-  const workflowYaml = `
+  const pieceYaml = `
 name: it-pipeline
-description: Pipeline test workflow
+description: Pipeline test piece
 max_iterations: 10
 initial_movement: plan

@@ -203,10 +203,10 @@ movements:
       instruction: "{task}"
 `;

-  const workflowPath = join(dir, 'workflow.yaml');
-  writeFileSync(workflowPath, workflowYaml);
+  const piecePath = join(dir, 'piece.yaml');
+  writeFileSync(piecePath, pieceYaml);

-  return { dir, workflowPath };
+  return { dir, piecePath };
 }

 function happyScenario(): void {
@@ -217,15 +217,15 @@ function happyScenario(): void {
   ]);
 }

-describe('Pipeline Modes IT: --task + --workflow path', () => {
+describe('Pipeline Modes IT: --task + --piece path', () => {
   let testDir: string;
-  let workflowPath: string;
+  let piecePath: string;

   beforeEach(() => {
     vi.clearAllMocks();
-    const setup = createTestWorkflowDir();
+    const setup = createTestPieceDir();
     testDir = setup.dir;
-    workflowPath = setup.workflowPath;
+    piecePath = setup.piecePath;
   });

   afterEach(() => {
@@ -238,7 +238,7 @@ describe('Pipeline Modes IT: --task + --workflow path', () => {

     const exitCode = await executePipeline({
       task: 'Add a feature',
-      workflow: workflowPath,
+      piece: piecePath,
       autoPr: false,
       skipGit: true,
       cwd: testDir,
@@ -248,30 +248,30 @@ describe('Pipeline Modes IT: --task + --workflow path', () => {
     expect(exitCode).toBe(0);
   });

-  it('should return EXIT_WORKFLOW_FAILED (3) on ABORT', async () => {
+  it('should return EXIT_PIECE_FAILED (3) on ABORT', async () => {
     setMockScenario([
       { agent: 'planner', status: 'done', content: '[PLAN:2]\n\nRequirements unclear.' },
     ]);

     const exitCode = await executePipeline({
       task: 'Vague task',
-      workflow: workflowPath,
+      piece: piecePath,
       autoPr: false,
       skipGit: true,
       cwd: testDir,
       provider: 'mock',
     });

-    expect(exitCode).toBe(EXIT_WORKFLOW_FAILED);
+    expect(exitCode).toBe(EXIT_PIECE_FAILED);
   });
 });

-describe('Pipeline Modes IT: --task + --workflow name (builtin)', () => {
+describe('Pipeline Modes IT: --task + --piece name (builtin)', () => {
   let testDir: string;

   beforeEach(() => {
     vi.clearAllMocks();
-    const setup = createTestWorkflowDir();
+    const setup = createTestPieceDir();
     testDir = setup.dir;
   });

@@ -280,7 +280,7 @@ describe('Pipeline Modes IT: --task + --workflow name (builtin)', () => {
     rmSync(testDir, { recursive: true, force: true });
   });

-  it('should load and execute builtin minimal workflow by name', async () => {
+  it('should load and execute builtin minimal piece by name', async () => {
     setMockScenario([
       { agent: 'coder', status: 'done', content: 'Implementation complete' },
       { agent: 'ai-antipattern-reviewer', status: 'done', content: 'No AI-specific issues' },
@@ -289,7 +289,7 @@ describe('Pipeline Modes IT: --task + --workflow name (builtin)', () => {

     const exitCode = await executePipeline({
       task: 'Add a feature',
-      workflow: 'minimal',
+      piece: 'minimal',
       autoPr: false,
       skipGit: true,
       cwd: testDir,
@@ -299,29 +299,29 @@ describe('Pipeline Modes IT: --task + --workflow name (builtin)', () => {
     expect(exitCode).toBe(0);
   });

-  it('should return EXIT_WORKFLOW_FAILED for non-existent workflow name', async () => {
+  it('should return EXIT_PIECE_FAILED for non-existent piece name', async () => {
     const exitCode = await executePipeline({
       task: 'Test task',
-      workflow: 'non-existent-workflow-xyz',
+      piece: 'non-existent-piece-xyz',
       autoPr: false,
       skipGit: true,
       cwd: testDir,
       provider: 'mock',
     });

-    expect(exitCode).toBe(EXIT_WORKFLOW_FAILED);
+    expect(exitCode).toBe(EXIT_PIECE_FAILED);
   });
 });

 describe('Pipeline Modes IT: --issue', () => {
   let testDir: string;
-  let workflowPath: string;
+  let piecePath: string;

   beforeEach(() => {
     vi.clearAllMocks();
-    const setup = createTestWorkflowDir();
+    const setup = createTestPieceDir();
     testDir = setup.dir;
-    workflowPath = setup.workflowPath;
+    piecePath = setup.piecePath;
   });

   afterEach(() => {
@@ -329,7 +329,7 @@ describe('Pipeline Modes IT: --issue', () => {
     rmSync(testDir, { recursive: true, force: true });
   });

-  it('should fetch issue and execute workflow', async () => {
+  it('should fetch issue and execute piece', async () => {
     mockCheckGhCli.mockReturnValue({ available: true });
     mockFetchIssue.mockReturnValue({
|
||||||
number: 42,
|
number: 42,
|
||||||
@ -341,7 +341,7 @@ describe('Pipeline Modes IT: --issue', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
issueNumber: 42,
|
issueNumber: 42,
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -357,7 +357,7 @@ describe('Pipeline Modes IT: --issue', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
issueNumber: 42,
|
issueNumber: 42,
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -375,7 +375,7 @@ describe('Pipeline Modes IT: --issue', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
issueNumber: 999,
|
issueNumber: 999,
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -387,7 +387,7 @@ describe('Pipeline Modes IT: --issue', () => {
|
|||||||
|
|
||||||
it('should return EXIT_ISSUE_FETCH_FAILED when neither --issue nor --task specified', async () => {
|
it('should return EXIT_ISSUE_FETCH_FAILED when neither --issue nor --task specified', async () => {
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -400,13 +400,13 @@ describe('Pipeline Modes IT: --issue', () => {
|
|||||||
|
|
||||||
describe('Pipeline Modes IT: --auto-pr', () => {
|
describe('Pipeline Modes IT: --auto-pr', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
let workflowPath: string;
|
let piecePath: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
vi.clearAllMocks();
|
vi.clearAllMocks();
|
||||||
const setup = createTestWorkflowDir();
|
const setup = createTestPieceDir();
|
||||||
testDir = setup.dir;
|
testDir = setup.dir;
|
||||||
workflowPath = setup.workflowPath;
|
piecePath = setup.piecePath;
|
||||||
});
|
});
|
||||||
|
|
||||||
afterEach(() => {
|
afterEach(() => {
|
||||||
@ -420,7 +420,7 @@ describe('Pipeline Modes IT: --auto-pr', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Add a feature',
|
task: 'Add a feature',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: true,
|
autoPr: true,
|
||||||
skipGit: false,
|
skipGit: false,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -437,7 +437,7 @@ describe('Pipeline Modes IT: --auto-pr', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Add a feature',
|
task: 'Add a feature',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: true,
|
autoPr: true,
|
||||||
skipGit: false,
|
skipGit: false,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -452,7 +452,7 @@ describe('Pipeline Modes IT: --auto-pr', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Add a feature',
|
task: 'Add a feature',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: true,
|
autoPr: true,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -466,13 +466,13 @@ describe('Pipeline Modes IT: --auto-pr', () => {
|
|||||||
|
|
||||||
describe('Pipeline Modes IT: --provider and --model overrides', () => {
|
describe('Pipeline Modes IT: --provider and --model overrides', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
let workflowPath: string;
|
let piecePath: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
vi.clearAllMocks();
|
vi.clearAllMocks();
|
||||||
const setup = createTestWorkflowDir();
|
const setup = createTestPieceDir();
|
||||||
testDir = setup.dir;
|
testDir = setup.dir;
|
||||||
workflowPath = setup.workflowPath;
|
piecePath = setup.piecePath;
|
||||||
});
|
});
|
||||||
|
|
||||||
afterEach(() => {
|
afterEach(() => {
|
||||||
@ -480,12 +480,12 @@ describe('Pipeline Modes IT: --provider and --model overrides', () => {
|
|||||||
rmSync(testDir, { recursive: true, force: true });
|
rmSync(testDir, { recursive: true, force: true });
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should pass provider override to workflow execution', async () => {
|
it('should pass provider override to piece execution', async () => {
|
||||||
happyScenario();
|
happyScenario();
|
||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Test task',
|
task: 'Test task',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -495,12 +495,12 @@ describe('Pipeline Modes IT: --provider and --model overrides', () => {
|
|||||||
expect(exitCode).toBe(0);
|
expect(exitCode).toBe(0);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should pass model override to workflow execution', async () => {
|
it('should pass model override to piece execution', async () => {
|
||||||
happyScenario();
|
happyScenario();
|
||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Test task',
|
task: 'Test task',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
@ -514,13 +514,13 @@ describe('Pipeline Modes IT: --provider and --model overrides', () => {
|
|||||||
|
|
||||||
describe('Pipeline Modes IT: review → fix loop', () => {
|
describe('Pipeline Modes IT: review → fix loop', () => {
|
||||||
let testDir: string;
|
let testDir: string;
|
||||||
let workflowPath: string;
|
let piecePath: string;
|
||||||
|
|
||||||
beforeEach(() => {
|
beforeEach(() => {
|
||||||
vi.clearAllMocks();
|
vi.clearAllMocks();
|
||||||
const setup = createTestWorkflowDir();
|
const setup = createTestPieceDir();
|
||||||
testDir = setup.dir;
|
testDir = setup.dir;
|
||||||
workflowPath = setup.workflowPath;
|
piecePath = setup.piecePath;
|
||||||
});
|
});
|
||||||
|
|
||||||
afterEach(() => {
|
afterEach(() => {
|
||||||
@ -542,7 +542,7 @@ describe('Pipeline Modes IT: review → fix loop', () => {
|
|||||||
|
|
||||||
const exitCode = await executePipeline({
|
const exitCode = await executePipeline({
|
||||||
task: 'Task with fix loop',
|
task: 'Task with fix loop',
|
||||||
workflow: workflowPath,
|
piece: piecePath,
|
||||||
autoPr: false,
|
autoPr: false,
|
||||||
skipGit: true,
|
skipGit: true,
|
||||||
cwd: testDir,
|
cwd: testDir,
|
||||||
|
|||||||
@@ -5,7 +5,7 @@
  * of the pipeline execution flow. Git operations are skipped via --skip-git.
  *
  * Mocked: git operations (child_process), GitHub API, UI output, notifications, session
- * Not mocked: executeTask, executeWorkflow, WorkflowEngine, runAgent, rule evaluation
+ * Not mocked: executeTask, executePiece, PieceEngine, runAgent, rule evaluation
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -92,7 +92,7 @@ vi.mock('../infra/config/paths.js', async (importOriginal) => {
     updateAgentSession: vi.fn(),
     loadWorktreeSessions: vi.fn().mockReturnValue({}),
     updateWorktreeSession: vi.fn(),
-    getCurrentWorkflow: vi.fn().mockReturnValue('default'),
+    getCurrentPiece: vi.fn().mockReturnValue('default'),
     getProjectConfigDir: vi.fn().mockImplementation((cwd: string) => join(cwd, '.takt')),
   };
 });
@@ -123,7 +123,7 @@ vi.mock('../shared/prompt/index.js', () => ({
   promptInput: vi.fn().mockResolvedValue(null),
 }));
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
   runReportPhase: vi.fn().mockResolvedValue(undefined),
   runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
@@ -135,8 +135,8 @@ import { executePipeline } from '../features/pipeline/index.js';
 
 // --- Test helpers ---
 
-/** Create a minimal test workflow YAML + agent files in a temp directory */
-function createTestWorkflowDir(): { dir: string; workflowPath: string } {
+/** Create a minimal test piece YAML + agent files in a temp directory */
+function createTestPieceDir(): { dir: string; piecePath: string } {
   const dir = mkdtempSync(join(tmpdir(), 'takt-it-pipeline-'));
 
   // Create .takt/reports structure
@@ -149,10 +149,10 @@ function createTestWorkflowDir(): { dir: string; workflowPath: string } {
   writeFileSync(join(agentsDir, 'coder.md'), 'You are a coder. Implement the task.');
   writeFileSync(join(agentsDir, 'reviewer.md'), 'You are a reviewer. Review the code.');
 
-  // Create a simple workflow YAML
-  const workflowYaml = `
+  // Create a simple piece YAML
+  const pieceYaml = `
 name: it-simple
-description: Integration test workflow
+description: Integration test piece
 max_iterations: 10
 initial_movement: plan
 
@@ -185,21 +185,21 @@ movements:
     instruction: "{task}"
 `;
 
-  const workflowPath = join(dir, 'workflow.yaml');
-  writeFileSync(workflowPath, workflowYaml);
+  const piecePath = join(dir, 'piece.yaml');
+  writeFileSync(piecePath, pieceYaml);
 
-  return { dir, workflowPath };
+  return { dir, piecePath };
 }
 
 describe('Pipeline Integration Tests', () => {
   let testDir: string;
-  let workflowPath: string;
+  let piecePath: string;
 
   beforeEach(() => {
     vi.clearAllMocks();
-    const setup = createTestWorkflowDir();
+    const setup = createTestPieceDir();
     testDir = setup.dir;
-    workflowPath = setup.workflowPath;
+    piecePath = setup.piecePath;
   });
 
   afterEach(() => {
@@ -207,7 +207,7 @@ describe('Pipeline Integration Tests', () => {
     rmSync(testDir, { recursive: true, force: true });
   });
 
-  it('should complete pipeline with workflow path + skip-git + mock scenario', async () => {
+  it('should complete pipeline with piece path + skip-git + mock scenario', async () => {
     // Scenario: plan -> implement -> review -> COMPLETE
     // agent field must match extractAgentName(movement.agent), i.e., the .md filename without extension
     setMockScenario([
@@ -218,7 +218,7 @@ describe('Pipeline Integration Tests', () => {
 
     const exitCode = await executePipeline({
       task: 'Add a hello world function',
-      workflow: workflowPath,
+      piece: piecePath,
       autoPr: false,
       skipGit: true,
       cwd: testDir,
@@ -228,8 +228,8 @@ describe('Pipeline Integration Tests', () => {
     expect(exitCode).toBe(0);
   });
 
-  it('should complete pipeline with workflow name + skip-git + mock scenario', async () => {
-    // Use builtin 'minimal' workflow
+  it('should complete pipeline with piece name + skip-git + mock scenario', async () => {
+    // Use builtin 'minimal' piece
     // agent field: extractAgentName result (from .md filename)
     // tag in content: [MOVEMENT_NAME:N] where MOVEMENT_NAME is the movement name uppercased
     setMockScenario([
@@ -240,7 +240,7 @@ describe('Pipeline Integration Tests', () => {
 
     const exitCode = await executePipeline({
       task: 'Add a hello world function',
-      workflow: 'minimal',
+      piece: 'minimal',
       autoPr: false,
       skipGit: true,
       cwd: testDir,
@@ -250,21 +250,21 @@ describe('Pipeline Integration Tests', () => {
     expect(exitCode).toBe(0);
   });
 
-  it('should return EXIT_WORKFLOW_FAILED for non-existent workflow', async () => {
+  it('should return EXIT_PIECE_FAILED for non-existent piece', async () => {
     const exitCode = await executePipeline({
       task: 'Test task',
-      workflow: 'non-existent-workflow-xyz',
+      piece: 'non-existent-piece-xyz',
       autoPr: false,
       skipGit: true,
       cwd: testDir,
       provider: 'mock',
     });
 
-    // executeTask returns false when workflow not found → executePipeline returns EXIT_WORKFLOW_FAILED (3)
+    // executeTask returns false when piece not found → executePipeline returns EXIT_PIECE_FAILED (3)
     expect(exitCode).toBe(3);
   });
 
-  it('should handle ABORT transition from workflow', async () => {
+  it('should handle ABORT transition from piece', async () => {
     // Scenario: plan returns second rule -> ABORT
     setMockScenario([
       { agent: 'planner', status: 'done', content: '[PLAN:2]\n\nRequirements unclear, insufficient info.' },
@@ -272,14 +272,14 @@ describe('Pipeline Integration Tests', () => {
 
     const exitCode = await executePipeline({
       task: 'Vague task with no details',
-      workflow: workflowPath,
+      piece: piecePath,
       autoPr: false,
       skipGit: true,
       cwd: testDir,
       provider: 'mock',
     });
 
-    // ABORT means workflow failed -> EXIT_WORKFLOW_FAILED (3)
+    // ABORT means piece failed -> EXIT_PIECE_FAILED (3)
     expect(exitCode).toBe(3);
   });
 
@@ -296,7 +296,7 @@ describe('Pipeline Integration Tests', () => {
 
     const exitCode = await executePipeline({
       task: 'Task needing a fix',
-      workflow: workflowPath,
+      piece: piecePath,
       autoPr: false,
       skipGit: true,
       cwd: testDir,
@@ -15,7 +15,7 @@
  */
 
 import { describe, it, expect, beforeEach, vi } from 'vitest';
-import type { WorkflowMovement, WorkflowState, WorkflowRule, AgentResponse } from '../core/models/index.js';
+import type { PieceMovement, PieceState, PieceRule, AgentResponse } from '../core/models/index.js';
 
 // --- Mocks ---
 
@@ -24,7 +24,7 @@ const mockCallAiJudge = vi.fn();
 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
 vi.mock('../infra/config/project/projectConfig.js', () => ({
@@ -33,21 +33,21 @@ vi.mock('../infra/config/project/projectConfig.js', () => ({
 
 // --- Imports (after mocks) ---
 
-import { detectMatchedRule, evaluateAggregateConditions } from '../core/workflow/index.js';
+import { detectMatchedRule, evaluateAggregateConditions } from '../core/piece/index.js';
 import { detectRuleIndex } from '../infra/claude/index.js';
-import type { RuleMatch, RuleEvaluatorContext } from '../core/workflow/index.js';
+import type { RuleMatch, RuleEvaluatorContext } from '../core/piece/index.js';
 
 // --- Test helpers ---
 
-function makeRule(condition: string, next: string, extra?: Partial<WorkflowRule>): WorkflowRule {
+function makeRule(condition: string, next: string, extra?: Partial<PieceRule>): PieceRule {
   return { condition, next, ...extra };
 }
 
 function makeMovement(
   name: string,
-  rules: WorkflowRule[],
-  parallel?: WorkflowMovement[],
-): WorkflowMovement {
+  rules: PieceRule[],
+  parallel?: PieceMovement[],
+): PieceMovement {
   return {
     name,
     agent: 'test-agent',
@@ -59,9 +59,9 @@ function makeMovement(
   };
 }
 
-function makeState(movementOutputs?: Map<string, AgentResponse>): WorkflowState {
+function makeState(movementOutputs?: Map<string, AgentResponse>): PieceState {
   return {
-    workflowName: 'it-test',
+    pieceName: 'it-test',
     currentMovement: 'test',
     iteration: 1,
     status: 'running',
@@ -399,7 +399,7 @@ describe('Rule Evaluation IT: movements without rules', () => {
   });
 
   it('should return undefined for movement with no rules', async () => {
-    const step: WorkflowMovement = {
+    const step: PieceMovement = {
       name: 'step',
       agent: 'agent',
       agentDisplayName: 'step',
@@ -6,7 +6,7 @@
  *
  * Mocked: UI, session, config, callAiJudge
  * Selectively mocked: phase-runner (to inspect call patterns)
- * Not mocked: WorkflowEngine, runAgent, detectMatchedRule, rule-evaluator
+ * Not mocked: PieceEngine, runAgent, detectMatchedRule, rule-evaluator
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -14,7 +14,7 @@ import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { setMockScenario, resetScenario } from '../infra/mock/index.js';
-import type { WorkflowConfig, WorkflowMovement, WorkflowRule } from '../core/models/index.js';
+import type { PieceConfig, PieceMovement, PieceRule } from '../core/models/index.js';
 import { callAiJudge, detectRuleIndex } from '../infra/claude/index.js';
 
 // --- Mocks ---
@@ -31,7 +31,7 @@ const mockNeedsStatusJudgmentPhase = vi.fn();
 const mockRunReportPhase = vi.fn();
 const mockRunStatusJudgmentPhase = vi.fn();
 
-vi.mock('../core/workflow/phase-runner.js', () => ({
+vi.mock('../core/piece/phase-runner.js', () => ({
   needsStatusJudgmentPhase: (...args: unknown[]) => mockNeedsStatusJudgmentPhase(...args),
   runReportPhase: (...args: unknown[]) => mockRunReportPhase(...args),
   runStatusJudgmentPhase: (...args: unknown[]) => mockRunStatusJudgmentPhase(...args),
@@ -47,7 +47,7 @@ vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
   getLanguage: vi.fn().mockReturnValue('en'),
   getDisabledBuiltins: vi.fn().mockReturnValue([]),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
 
 vi.mock('../infra/config/project/projectConfig.js', () => ({
@@ -56,11 +56,11 @@ vi.mock('../infra/config/project/projectConfig.js', () => ({
 
 // --- Imports (after mocks) ---
 
-import { WorkflowEngine } from '../core/workflow/index.js';
+import { PieceEngine } from '../core/piece/index.js';
 
 // --- Test helpers ---
 
-function makeRule(condition: string, next: string): WorkflowRule {
+function makeRule(condition: string, next: string): PieceRule {
   return { condition, next };
 }
 
@@ -87,9 +87,9 @@ function buildEngineOptions(projectCwd: string) {
 function makeMovement(
   name: string,
   agentPath: string,
-  rules: WorkflowRule[],
+  rules: PieceRule[],
   options: { report?: string | { label: string; path: string }[]; edit?: boolean } = {},
-): WorkflowMovement {
+): PieceMovement {
   return {
     name,
     agent: './agents/agent.md',
@@ -129,7 +129,7 @@ describe('Three-Phase Execution IT: phase1 only (no report, no tag rules)', () =
       { status: 'done', content: '[STEP:1]\n\nDone.' },
     ]);
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'it-phase1-only',
       description: 'Test',
       maxIterations: 5,
@@ -142,7 +142,7 @@ describe('Three-Phase Execution IT: phase1 only (no report, no tag rules)', () =
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -181,7 +181,7 @@ describe('Three-Phase Execution IT: phase1 + phase2 (report defined)', () => {
       { status: 'done', content: '[STEP:1]\n\nDone.' },
     ]);
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'it-phase1-2',
       description: 'Test',
       maxIterations: 5,
@@ -194,7 +194,7 @@ describe('Three-Phase Execution IT: phase1 + phase2 (report defined)', () => {
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -211,7 +211,7 @@ describe('Three-Phase Execution IT: phase1 + phase2 (report defined)', () => {
       { status: 'done', content: '[STEP:1]\n\nDone.' },
     ]);
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'it-phase1-2-multi',
       description: 'Test',
       maxIterations: 5,
@@ -223,7 +223,7 @@ describe('Three-Phase Execution IT: phase1 + phase2 (report defined)', () => {
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -262,7 +262,7 @@ describe('Three-Phase Execution IT: phase1 + phase3 (tag rules defined)', () =>
       { status: 'done', content: 'Agent completed the work.' },
     ]);
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
      name: 'it-phase1-3',
       description: 'Test',
       maxIterations: 5,
@@ -275,7 +275,7 @@ describe('Three-Phase Execution IT: phase1 + phase3 (tag rules defined)', () =>
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -313,7 +313,7 @@ describe('Three-Phase Execution IT: all three phases', () => {
       { status: 'done', content: 'Agent completed the work.' },
     ]);
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'it-all-phases',
       description: 'Test',
       maxIterations: 5,
@@ -326,7 +326,7 @@ describe('Three-Phase Execution IT: all three phases', () => {
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -373,7 +373,7 @@ describe('Three-Phase Execution IT: phase3 tag → rule match', () => {
     // Phase 3 returns rule 2 (ABORT)
     mockRunStatusJudgmentPhase.mockResolvedValue('[STEP1:2]');
 
-    const config: WorkflowConfig = {
+    const config: PieceConfig = {
       name: 'it-phase3-tag',
       description: 'Test',
       maxIterations: 5,
@@ -389,7 +389,7 @@ describe('Three-Phase Execution IT: phase3 tag → rule match', () => {
       ],
     };
 
-    const engine = new WorkflowEngine(config, testDir, 'Test task', {
+    const engine = new PieceEngine(config, testDir, 'Test task', {
       ...buildEngineOptions(testDir),
       provider: 'mock',
     });
@@ -7,7 +7,7 @@ import {
   AgentTypeSchema,
   StatusSchema,
   PermissionModeSchema,
-  WorkflowConfigRawSchema,
+  PieceConfigRawSchema,
   CustomAgentConfigSchema,
   GlobalConfigSchema,
 } from '../core/models/index.js';
@@ -57,11 +57,11 @@ describe('PermissionModeSchema', () => {
   });
 });
 
-describe('WorkflowConfigRawSchema', () => {
-  it('should parse valid workflow config', () => {
+describe('PieceConfigRawSchema', () => {
+  it('should parse valid piece config', () => {
     const config = {
-      name: 'test-workflow',
-      description: 'A test workflow',
+      name: 'test-piece',
+      description: 'A test piece',
       movements: [
         {
           name: 'step1',
@@ -75,8 +75,8 @@ describe('WorkflowConfigRawSchema', () => {
       ],
     };
 
-    const result = WorkflowConfigRawSchema.parse(config);
-    expect(result.name).toBe('test-workflow');
+    const result = PieceConfigRawSchema.parse(config);
+    expect(result.name).toBe('test-piece');
     expect(result.movements).toHaveLength(1);
     expect(result.movements![0]?.allowed_tools).toEqual(['Read', 'Grep']);
     expect(result.max_iterations).toBe(10);
@@ -84,7 +84,7 @@ describe('WorkflowConfigRawSchema', () => {
 
   it('should parse movement with permission_mode', () => {
     const config = {
-      name: 'test-workflow',
+      name: 'test-piece',
       movements: [
         {
           name: 'implement',
@@ -99,13 +99,13 @@ describe('WorkflowConfigRawSchema', () => {
       ],
     };
 
-    const result = WorkflowConfigRawSchema.parse(config);
+    const result = PieceConfigRawSchema.parse(config);
     expect(result.movements![0]?.permission_mode).toBe('edit');
   });
 
   it('should allow omitting permission_mode', () => {
     const config = {
-      name: 'test-workflow',
+      name: 'test-piece',
       movements: [
         {
           name: 'plan',
@@ -115,13 +115,13 @@ describe('WorkflowConfigRawSchema', () => {
       ],
     };
 
-    const result = WorkflowConfigRawSchema.parse(config);
+    const result = PieceConfigRawSchema.parse(config);
     expect(result.movements![0]?.permission_mode).toBeUndefined();
   });
 
   it('should reject invalid permission_mode', () => {
     const config = {
-      name: 'test-workflow',
+      name: 'test-piece',
       movements: [
         {
           name: 'step1',
@@ -132,16 +132,16 @@ describe('WorkflowConfigRawSchema', () => {
       ],
     };
 
-    expect(() => WorkflowConfigRawSchema.parse(config)).toThrow();
+    expect(() => PieceConfigRawSchema.parse(config)).toThrow();
   });
 
   it('should require at least one movement', () => {
     const config = {
-      name: 'empty-workflow',
+      name: 'empty-piece',
       movements: [],
     };
 
-    expect(() => WorkflowConfigRawSchema.parse(config)).toThrow();
+    expect(() => PieceConfigRawSchema.parse(config)).toThrow();
   });
 });
 
@@ -202,7 +202,7 @@ describe('GlobalConfigSchema', () => {
     const result = GlobalConfigSchema.parse(config);
 
     expect(result.trusted_directories).toEqual([]);
-    expect(result.default_workflow).toBe('default');
+    expect(result.default_piece).toBe('default');
     expect(result.log_level).toBe('info');
     expect(result.provider).toBe('claude');
   });
@@ -210,7 +210,7 @@ describe('GlobalConfigSchema', () => {
   it('should accept valid config', () => {
     const config = {
       trusted_directories: ['/home/user/projects'],
-      default_workflow: 'custom',
+      default_piece: 'custom',
       log_level: 'debug' as const,
     };
 
@@ -3,12 +3,12 @@
  *
  * Covers:
  * - Schema validation for parallel sub-movements
- * - Workflow loader normalization of ai() conditions and parallel movements
+ * - Piece loader normalization of ai() conditions and parallel movements
  * - Engine parallel movement aggregation logic
  */
 
 import { describe, it, expect } from 'vitest';
-import { WorkflowConfigRawSchema, ParallelSubMovementRawSchema, WorkflowMovementRawSchema } from '../core/models/index.js';
+import { PieceConfigRawSchema, ParallelSubMovementRawSchema, PieceMovementRawSchema } from '../core/models/index.js';
 
 describe('ParallelSubMovementRawSchema', () => {
   it('should validate a valid parallel sub-movement', () => {
@@ -73,7 +73,7 @@ describe('ParallelSubMovementRawSchema', () => {
   });
 });
 
-describe('WorkflowMovementRawSchema with parallel', () => {
+describe('PieceMovementRawSchema with parallel', () => {
   it('should accept a movement with parallel sub-movements (no agent)', () => {
     const raw = {
       name: 'parallel-review',
@@ -86,7 +86,7 @@ describe('WorkflowMovementRawSchema with parallel', () => {
       ],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 
@@ -96,7 +96,7 @@ describe('WorkflowMovementRawSchema with parallel', () => {
       instruction_template: 'Do something',
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 
@@ -107,7 +107,7 @@ describe('WorkflowMovementRawSchema with parallel', () => {
       instruction_template: 'Code something',
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 
@@ -117,15 +117,15 @@ describe('WorkflowMovementRawSchema with parallel', () => {
       parallel: [],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 });
 
-describe('WorkflowConfigRawSchema with parallel movements', () => {
-  it('should validate a workflow with parallel movement', () => {
+describe('PieceConfigRawSchema with parallel movements', () => {
+  it('should validate a piece with parallel movement', () => {
     const raw = {
-      name: 'test-parallel-workflow',
+      name: 'test-parallel-piece',
       movements: [
         {
           name: 'plan',
@@ -148,7 +148,7 @@ describe('WorkflowConfigRawSchema with parallel movements', () => {
       max_iterations: 10,
     };
 
-    const result = WorkflowConfigRawSchema.safeParse(raw);
+    const result = PieceConfigRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
     if (result.success) {
       expect(result.data.movements).toHaveLength(2);
@@ -156,9 +156,9 @@ describe('WorkflowConfigRawSchema with parallel movements', () => {
     }
   });
 
-  it('should validate a workflow mixing normal and parallel movements', () => {
+  it('should validate a piece mixing normal and parallel movements', () => {
     const raw = {
-      name: 'mixed-workflow',
+      name: 'mixed-piece',
       movements: [
         { name: 'plan', agent: 'planner.md', rules: [{ condition: 'Done', next: 'implement' }] },
         { name: 'implement', agent: 'coder.md', rules: [{ condition: 'Done', next: 'review' }] },
@@ -174,7 +174,7 @@ describe('WorkflowConfigRawSchema with parallel movements', () => {
       initial_movement: 'plan',
     };
 
-    const result = WorkflowConfigRawSchema.safeParse(raw);
+    const result = PieceConfigRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
     if (result.success) {
       expect(result.data.movements[0].agent).toBe('planner.md');
@@ -183,7 +183,7 @@ describe('WorkflowConfigRawSchema with parallel movements', () => {
   });
 });
 
-describe('ai() condition in WorkflowRuleSchema', () => {
+describe('ai() condition in PieceRuleSchema', () => {
   it('should accept ai() condition as a string', () => {
     const raw = {
       name: 'test-step',
@@ -194,7 +194,7 @@ describe('ai() condition in WorkflowRuleSchema', () => {
       ],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
     if (result.success) {
       expect(result.data.rules?.[0].condition).toBe('ai("All reviews approved")');
@@ -212,13 +212,13 @@ describe('ai() condition in WorkflowRuleSchema', () => {
       ],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 });
 
 describe('ai() condition regex parsing', () => {
-  // Test the regex pattern used in workflowLoader.ts
+  // Test the regex pattern used in pieceLoader.ts
   const AI_CONDITION_REGEX = /^ai\("(.+)"\)$/;
 
   it('should match simple ai() condition', () => {
@@ -299,7 +299,7 @@ describe('all()/any() aggregate condition regex parsing', () => {
   });
 });
 
-describe('all()/any() condition in WorkflowMovementRawSchema', () => {
+describe('all()/any() condition in PieceMovementRawSchema', () => {
   it('should accept all() condition as a string', () => {
     const raw = {
       name: 'parallel-review',
@@ -312,7 +312,7 @@ describe('all()/any() condition in WorkflowMovementRawSchema', () => {
       ],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
     if (result.success) {
       expect(result.data.rules?.[0].condition).toBe('all("approved")');
@@ -333,7 +333,7 @@ describe('all()/any() condition in WorkflowMovementRawSchema', () => {
       ],
     };
 
-    const result = WorkflowMovementRawSchema.safeParse(raw);
+    const result = PieceMovementRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });
 });
@@ -3,8 +3,8 @@
  */
 
 import { describe, it, expect, beforeEach } from 'vitest';
-import { ParallelLogger } from '../core/workflow/index.js';
-import type { StreamEvent } from '../core/workflow/index.js';
+import { ParallelLogger } from '../core/piece/index.js';
+import type { StreamEvent } from '../core/piece/index.js';
 
 describe('ParallelLogger', () => {
   let output: string[];
@@ -1,5 +1,5 @@
 /**
- * Tests for builtin workflow enable/disable flag
+ * Tests for builtin piece enable/disable flag
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -13,20 +13,20 @@ vi.mock('../infra/config/global/globalConfig.js', async (importOriginal) => {
     ...original,
     getLanguage: () => 'en',
     getDisabledBuiltins: () => [],
-    getBuiltinWorkflowsEnabled: () => false,
+    getBuiltinPiecesEnabled: () => false,
   };
 });
 
-const { listWorkflows } = await import('../infra/config/loaders/workflowLoader.js');
+const { listPieces } = await import('../infra/config/loaders/pieceLoader.js');
 
-const SAMPLE_WORKFLOW = `name: test-workflow
+const SAMPLE_PIECE = `name: test-piece
 movements:
   - name: step1
     agent: coder
     instruction: "{task}"
 `;
 
-describe('builtin workflow toggle', () => {
+describe('builtin piece toggle', () => {
   let tempDir: string;
 
   beforeEach(() => {
@@ -37,13 +37,13 @@ describe('builtin workflow toggle', () => {
     rmSync(tempDir, { recursive: true, force: true });
   });
 
-  it('should exclude builtin workflows when disabled', () => {
-    const projectWorkflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(projectWorkflowsDir, { recursive: true });
-    writeFileSync(join(projectWorkflowsDir, 'project-custom.yaml'), SAMPLE_WORKFLOW);
+  it('should exclude builtin pieces when disabled', () => {
+    const projectPiecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(projectPiecesDir, { recursive: true });
+    writeFileSync(join(projectPiecesDir, 'project-custom.yaml'), SAMPLE_PIECE);
 
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('project-custom');
-    expect(workflows).not.toContain('default');
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('project-custom');
+    expect(pieces).not.toContain('default');
   });
 });
src/__tests__/piece-categories.test.ts (new file, 303 lines)
@@ -0,0 +1,303 @@
+/**
+ * Tests for piece category (subdirectory) support — Issue #85
+ */
+
+import { describe, it, expect, beforeEach, afterEach } from 'vitest';
+import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
+import { join } from 'node:path';
+import { tmpdir } from 'node:os';
+import {
+  listPieces,
+  listPieceEntries,
+  loadAllPieces,
+  loadPiece,
+} from '../infra/config/loaders/pieceLoader.js';
+import type { PieceDirEntry } from '../infra/config/loaders/pieceLoader.js';
+import {
+  buildPieceSelectionItems,
+  buildTopLevelSelectOptions,
+  parseCategorySelection,
+  buildCategoryPieceOptions,
+  type PieceSelectionItem,
+} from '../features/pieceSelection/index.js';
+
+const SAMPLE_PIECE = `name: test-piece
+description: Test piece
+initial_movement: step1
+max_iterations: 1
+
+movements:
+  - name: step1
+    agent: coder
+    instruction: "{task}"
+`;
+
+function createPiece(dir: string, name: string, content?: string): void {
+  writeFileSync(join(dir, `${name}.yaml`), content ?? SAMPLE_PIECE);
+}
+
+describe('piece categories - directory scanning', () => {
+  let tempDir: string;
+  let piecesDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
+    piecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should discover root-level pieces', () => {
+    createPiece(piecesDir, 'simple');
+    createPiece(piecesDir, 'advanced');
+
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('simple');
+    expect(pieces).toContain('advanced');
+  });
+
+  it('should discover pieces in subdirectories with category prefix', () => {
+    const frontendDir = join(piecesDir, 'frontend');
+    mkdirSync(frontendDir);
+    createPiece(frontendDir, 'react');
+    createPiece(frontendDir, 'vue');
+
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('frontend/react');
+    expect(pieces).toContain('frontend/vue');
+  });
+
+  it('should discover both root-level and categorized pieces', () => {
+    createPiece(piecesDir, 'simple');
+
+    const frontendDir = join(piecesDir, 'frontend');
+    mkdirSync(frontendDir);
+    createPiece(frontendDir, 'react');
+
+    const backendDir = join(piecesDir, 'backend');
+    mkdirSync(backendDir);
+    createPiece(backendDir, 'api');
+
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('simple');
+    expect(pieces).toContain('frontend/react');
+    expect(pieces).toContain('backend/api');
+  });
+
+  it('should not scan deeper than 1 level', () => {
+    const deepDir = join(piecesDir, 'category', 'subcategory');
+    mkdirSync(deepDir, { recursive: true });
+    createPiece(deepDir, 'deep');
+
+    const pieces = listPieces(tempDir);
+    // category/subcategory should be treated as a directory entry, not scanned further
+    expect(pieces).not.toContain('category/subcategory/deep');
+    // Only 1-level: category/deep would not exist since deep.yaml is in subcategory
+    expect(pieces).not.toContain('deep');
+  });
+});
+
+describe('piece categories - listPieceEntries', () => {
+  let tempDir: string;
+  let piecesDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
+    piecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should return entries with category information', () => {
+    createPiece(piecesDir, 'simple');
+
+    const frontendDir = join(piecesDir, 'frontend');
+    mkdirSync(frontendDir);
+    createPiece(frontendDir, 'react');
+
+    const entries = listPieceEntries(tempDir);
+    const simpleEntry = entries.find((e) => e.name === 'simple');
+    const reactEntry = entries.find((e) => e.name === 'frontend/react');
+
+    expect(simpleEntry).toBeDefined();
+    expect(simpleEntry!.category).toBeUndefined();
+    expect(simpleEntry!.source).toBe('project');
+
+    expect(reactEntry).toBeDefined();
+    expect(reactEntry!.category).toBe('frontend');
+    expect(reactEntry!.source).toBe('project');
+  });
+});
+
+describe('piece categories - loadAllPieces', () => {
+  let tempDir: string;
+  let piecesDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
+    piecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should load categorized pieces with qualified names as keys', () => {
+    const frontendDir = join(piecesDir, 'frontend');
+    mkdirSync(frontendDir);
+    createPiece(frontendDir, 'react');
+
+    const pieces = loadAllPieces(tempDir);
+    expect(pieces.has('frontend/react')).toBe(true);
+  });
+});
+
+describe('piece categories - loadPiece', () => {
+  let tempDir: string;
+  let piecesDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
+    piecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(piecesDir, { recursive: true });
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should load piece by category/name identifier', () => {
+    const frontendDir = join(piecesDir, 'frontend');
+    mkdirSync(frontendDir);
+    createPiece(frontendDir, 'react');
+
+    const piece = loadPiece('frontend/react', tempDir);
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('test-piece');
+  });
+
+  it('should return null for non-existent category/name', () => {
+    const piece = loadPiece('nonexistent/piece', tempDir);
+    expect(piece).toBeNull();
+  });
+
+  it('should support .yml extension in subdirectories', () => {
+    const backendDir = join(piecesDir, 'backend');
+    mkdirSync(backendDir);
+    writeFileSync(join(backendDir, 'api.yml'), SAMPLE_PIECE);
+
+    const piece = loadPiece('backend/api', tempDir);
+    expect(piece).not.toBeNull();
+  });
+});
+
+describe('buildPieceSelectionItems', () => {
+  it('should separate root pieces and categories', () => {
+    const entries: PieceDirEntry[] = [
+      { name: 'simple', path: '/tmp/simple.yaml', source: 'project' },
+      { name: 'frontend/react', path: '/tmp/frontend/react.yaml', category: 'frontend', source: 'project' },
+      { name: 'frontend/vue', path: '/tmp/frontend/vue.yaml', category: 'frontend', source: 'project' },
+      { name: 'backend/api', path: '/tmp/backend/api.yaml', category: 'backend', source: 'project' },
+    ];
+
+    const items = buildPieceSelectionItems(entries);
+
+    const pieces = items.filter((i) => i.type === 'piece');
+    const categories = items.filter((i) => i.type === 'category');
+
+    expect(pieces).toHaveLength(1);
+    expect(pieces[0]!.name).toBe('simple');
+
+    expect(categories).toHaveLength(2);
+    const frontend = categories.find((c) => c.name === 'frontend');
+    expect(frontend).toBeDefined();
+    expect(frontend!.type === 'category' && frontend!.pieces).toEqual(['frontend/react', 'frontend/vue']);
+
+    const backend = categories.find((c) => c.name === 'backend');
+    expect(backend).toBeDefined();
+    expect(backend!.type === 'category' && backend!.pieces).toEqual(['backend/api']);
+  });
+
+  it('should sort items alphabetically', () => {
+    const entries: PieceDirEntry[] = [
+      { name: 'zebra', path: '/tmp/zebra.yaml', source: 'project' },
+      { name: 'alpha', path: '/tmp/alpha.yaml', source: 'project' },
+      { name: 'misc/playground', path: '/tmp/misc/playground.yaml', category: 'misc', source: 'project' },
+    ];
+
+    const items = buildPieceSelectionItems(entries);
+    const names = items.map((i) => i.name);
+    expect(names).toEqual(['alpha', 'misc', 'zebra']);
+  });
+
+  it('should return empty array for empty input', () => {
+    const items = buildPieceSelectionItems([]);
+    expect(items).toEqual([]);
+  });
+});
+
+describe('2-stage category selection helpers', () => {
+  const items: PieceSelectionItem[] = [
+    { type: 'piece', name: 'simple' },
+    { type: 'category', name: 'frontend', pieces: ['frontend/react', 'frontend/vue'] },
+    { type: 'category', name: 'backend', pieces: ['backend/api'] },
+  ];
+
+  describe('buildTopLevelSelectOptions', () => {
+    it('should encode categories with prefix in value', () => {
+      const options = buildTopLevelSelectOptions(items, '');
+      const categoryOption = options.find((o) => o.label.includes('frontend'));
+      expect(categoryOption).toBeDefined();
+      expect(categoryOption!.value).toBe('__category__:frontend');
+    });
+
+    it('should mark current piece', () => {
+      const options = buildTopLevelSelectOptions(items, 'simple');
+      const simpleOption = options.find((o) => o.value === 'simple');
+      expect(simpleOption!.label).toContain('(current)');
+    });
+
+    it('should mark category containing current piece', () => {
+      const options = buildTopLevelSelectOptions(items, 'frontend/react');
+      const frontendOption = options.find((o) => o.value === '__category__:frontend');
+      expect(frontendOption!.label).toContain('(current)');
+    });
+  });
+
+  describe('parseCategorySelection', () => {
+    it('should return category name for category selection', () => {
+      expect(parseCategorySelection('__category__:frontend')).toBe('frontend');
+    });
+
+    it('should return null for direct piece selection', () => {
+      expect(parseCategorySelection('simple')).toBeNull();
+    });
+  });
+
+  describe('buildCategoryPieceOptions', () => {
+    it('should return options for pieces in a category', () => {
+      const options = buildCategoryPieceOptions(items, 'frontend', '');
+      expect(options).not.toBeNull();
+      expect(options).toHaveLength(2);
+      expect(options![0]!.value).toBe('frontend/react');
+      expect(options![0]!.label).toBe('react');
+    });
+
+    it('should mark current piece in category', () => {
+      const options = buildCategoryPieceOptions(items, 'frontend', 'frontend/vue');
+      const vueOption = options!.find((o) => o.value === 'frontend/vue');
+      expect(vueOption!.label).toContain('(current)');
+    });
+
+    it('should return null for non-existent category', () => {
+      expect(buildCategoryPieceOptions(items, 'nonexistent', '')).toBeNull();
+    });
+  });
+});
@@ -1,5 +1,5 @@
 /**
- * Tests for workflow category configuration loading and building
+ * Tests for piece category configuration loading and building
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -7,7 +7,7 @@ import { mkdirSync, rmSync, writeFileSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { randomUUID } from 'node:crypto';
-import type { WorkflowWithSource } from '../infra/config/index.js';
+import type { PieceWithSource } from '../infra/config/index.js';
 
 const pathsState = vi.hoisted(() => ({
   globalConfigPath: '',
@@ -32,7 +32,7 @@ vi.mock('../infra/resources/index.js', async (importOriginal) => {
   };
 });
 
-const workflowCategoriesState = vi.hoisted(() => ({
+const pieceCategoriesState = vi.hoisted(() => ({
   categories: undefined as any,
   showOthersCategory: undefined as boolean | undefined,
   othersCategoryName: undefined as string | undefined,
@@ -46,32 +46,32 @@ vi.mock('../infra/config/global/globalConfig.js', async (importOriginal) => {
   };
 });
 
-vi.mock('../infra/config/global/workflowCategories.js', async (importOriginal) => {
+vi.mock('../infra/config/global/pieceCategories.js', async (importOriginal) => {
   const original = await importOriginal() as Record<string, unknown>;
   return {
     ...original,
-    getWorkflowCategoriesConfig: () => workflowCategoriesState.categories,
-    getShowOthersCategory: () => workflowCategoriesState.showOthersCategory,
-    getOthersCategoryName: () => workflowCategoriesState.othersCategoryName,
+    getPieceCategoriesConfig: () => pieceCategoriesState.categories,
+    getShowOthersCategory: () => pieceCategoriesState.showOthersCategory,
+    getOthersCategoryName: () => pieceCategoriesState.othersCategoryName,
   };
 });
 
 const {
-  getWorkflowCategories,
+  getPieceCategories,
   loadDefaultCategories,
-  buildCategorizedWorkflows,
-  findWorkflowCategories,
-} = await import('../infra/config/loaders/workflowCategories.js');
+  buildCategorizedPieces,
+  findPieceCategories,
+} = await import('../infra/config/loaders/pieceCategories.js');
 
 function writeYaml(path: string, content: string): void {
   writeFileSync(path, content.trim() + '\n', 'utf-8');
 }
 
-function createWorkflowMap(entries: { name: string; source: 'builtin' | 'user' | 'project' }[]):
-  Map<string, WorkflowWithSource> {
-  const workflows = new Map<string, WorkflowWithSource>();
+function createPieceMap(entries: { name: string; source: 'builtin' | 'user' | 'project' }[]):
+  Map<string, PieceWithSource> {
+  const pieces = new Map<string, PieceWithSource>();
   for (const entry of entries) {
-    workflows.set(entry.name, {
+    pieces.set(entry.name, {
       source: entry.source,
      config: {
         name: entry.name,
@@ -81,10 +81,10 @@ function createWorkflowMap(entries: { name: string; source: 'builtin' | 'user' |
       },
     });
   }
-  return workflows;
+  return pieces;
 }
 
-describe('workflow category config loading', () => {
+describe('piece category config loading', () => {
   let testDir: string;
   let resourcesDir: string;
   let globalConfigPath: string;
@@ -101,91 +101,91 @@ describe('workflow category config loading', () => {
     pathsState.projectConfigPath = projectConfigPath;
     pathsState.resourcesDir = resourcesDir;
 
-    // Reset workflow categories state
-    workflowCategoriesState.categories = undefined;
-    workflowCategoriesState.showOthersCategory = undefined;
-    workflowCategoriesState.othersCategoryName = undefined;
+    // Reset piece categories state
+    pieceCategoriesState.categories = undefined;
+    pieceCategoriesState.showOthersCategory = undefined;
+    pieceCategoriesState.othersCategoryName = undefined;
   });
 
   afterEach(() => {
     rmSync(testDir, { recursive: true, force: true });
   });
 
-  it('should load default categories when no configs define workflow_categories', () => {
+  it('should load default categories when no configs define piece_categories', () => {
     writeYaml(join(resourcesDir, 'default-categories.yaml'), `
-workflow_categories:
+piece_categories:
   Default:
-    workflows:
+    pieces:
       - simple
 show_others_category: true
 others_category_name: "Others"
 `);
 
-    const config = getWorkflowCategories(testDir);
+    const config = getPieceCategories(testDir);
     expect(config).not.toBeNull();
-    expect(config!.workflowCategories).toEqual([
-      { name: 'Default', workflows: ['simple'], children: [] },
+    expect(config!.pieceCategories).toEqual([
+      { name: 'Default', pieces: ['simple'], children: [] },
     ]);
   });
 
-  it('should prefer project config over default when workflow_categories is defined', () => {
+  it('should prefer project config over default when piece_categories is defined', () => {
     writeYaml(join(resourcesDir, 'default-categories.yaml'), `
-workflow_categories:
+piece_categories:
   Default:
-    workflows:
+    pieces:
       - simple
 `);
 
     writeYaml(projectConfigPath, `
-workflow_categories:
+piece_categories:
   Project:
-    workflows:
+    pieces:
       - custom
 show_others_category: false
 `);
 
-    const config = getWorkflowCategories(testDir);
+    const config = getPieceCategories(testDir);
     expect(config).not.toBeNull();
-    expect(config!.workflowCategories).toEqual([
-      { name: 'Project', workflows: ['custom'], children: [] },
+    expect(config!.pieceCategories).toEqual([
+      { name: 'Project', pieces: ['custom'], children: [] },
     ]);
     expect(config!.showOthersCategory).toBe(false);
   });
 
-  it('should prefer user config over project config when workflow_categories is defined', () => {
+  it('should prefer user config over project config when piece_categories is defined', () => {
     writeYaml(join(resourcesDir, 'default-categories.yaml'), `
-workflow_categories:
+piece_categories:
   Default:
-    workflows:
+    pieces:
       - simple
 `);
 
     writeYaml(projectConfigPath, `
-workflow_categories:
+piece_categories:
   Project:
-    workflows:
+    pieces:
       - custom
 `);
 
     // Simulate user config from separate file
-    workflowCategoriesState.categories = {
+    pieceCategoriesState.categories = {
       User: {
-        workflows: ['preferred'],
+        pieces: ['preferred'],
       },
     };
 
-    const config = getWorkflowCategories(testDir);
+    const config = getPieceCategories(testDir);
     expect(config).not.toBeNull();
-    expect(config!.workflowCategories).toEqual([
-      { name: 'User', workflows: ['preferred'], children: [] },
+    expect(config!.pieceCategories).toEqual([
+      { name: 'User', pieces: ['preferred'], children: [] },
     ]);
   });
 
-  it('should ignore configs without workflow_categories and fall back to default', () => {
+  it('should ignore configs without piece_categories and fall back to default', () => {
     writeYaml(join(resourcesDir, 'default-categories.yaml'), `
-workflow_categories:
+piece_categories:
   Default:
-    workflows:
+    pieces:
       - simple
 `);
 
@@ -193,10 +193,10 @@ workflow_categories:
 show_others_category: false
 `);
 
-    const config = getWorkflowCategories(testDir);
+    const config = getPieceCategories(testDir);
     expect(config).not.toBeNull();
-    expect(config!.workflowCategories).toEqual([
-      { name: 'Default', workflows: ['simple'], children: [] },
+    expect(config!.pieceCategories).toEqual([
+      { name: 'Default', pieces: ['simple'], children: [] },
     ]);
   });
 
@@ -206,18 +206,18 @@ show_others_category: false
   });
 });
 
-describe('buildCategorizedWorkflows', () => {
-  it('should warn for missing workflows and generate Others', () => {
-    const allWorkflows = createWorkflowMap([
+describe('buildCategorizedPieces', () => {
+  it('should warn for missing pieces and generate Others', () => {
+    const allPieces = createPieceMap([
       { name: 'a', source: 'user' },
       { name: 'b', source: 'user' },
       { name: 'c', source: 'builtin' },
     ]);
     const config = {
-      workflowCategories: [
+      pieceCategories: [
       {
          name: 'Cat',
-          workflows: ['a', 'missing', 'c'],
+          pieces: ['a', 'missing', 'c'],
          children: [],
        },
      ],
@@ -225,43 +225,43 @@ describe('buildCategorizedWorkflows', () => {
       othersCategoryName: 'Others',
     };
 
-    const categorized = buildCategorizedWorkflows(allWorkflows, config);
+    const categorized = buildCategorizedPieces(allPieces, config);
     expect(categorized.categories).toEqual([
-      { name: 'Cat', workflows: ['a'], children: [] },
-      { name: 'Others', workflows: ['b'], children: [] },
+      { name: 'Cat', pieces: ['a'], children: [] },
+      { name: 'Others', pieces: ['b'], children: [] },
     ]);
     expect(categorized.builtinCategories).toEqual([
-      { name: 'Cat', workflows: ['c'], children: [] },
+      { name: 'Cat', pieces: ['c'], children: [] },
     ]);
-    expect(categorized.missingWorkflows).toEqual([
-      { categoryPath: ['Cat'], workflowName: 'missing' },
+    expect(categorized.missingPieces).toEqual([
+      { categoryPath: ['Cat'], pieceName: 'missing' },
    ]);
  });
 
   it('should skip empty categories', () => {
-    const allWorkflows = createWorkflowMap([
+    const allPieces = createPieceMap([
       { name: 'a', source: 'user' },
     ]);
     const config = {
-      workflowCategories: [
-        { name: 'Empty', workflows: [], children: [] },
+      pieceCategories: [
+        { name: 'Empty', pieces: [], children: [] },
      ],
       showOthersCategory: false,
       othersCategoryName: 'Others',
     };
 
-    const categorized = buildCategorizedWorkflows(allWorkflows, config);
+    const categorized = buildCategorizedPieces(allPieces, config);
     expect(categorized.categories).toEqual([]);
     expect(categorized.builtinCategories).toEqual([]);
   });
 
-  it('should find categories containing a workflow', () => {
+  it('should find categories containing a piece', () => {
     const categories = [
-      { name: 'A', workflows: ['shared'], children: [] },
-      { name: 'B', workflows: ['shared'], children: [] },
+      { name: 'A', pieces: ['shared'], children: [] },
+      { name: 'B', pieces: ['shared'], children: [] },
     ];
 
-    const paths = findWorkflowCategories('shared', categories).sort();
+    const paths = findPieceCategories('shared', categories).sort();
     expect(paths).toEqual(['A', 'B']);
   });
 
@@ -269,14 +269,14 @@ describe('buildCategorizedWorkflows', () => {
     const categories = [
       {
         name: 'Parent',
-        workflows: [],
+        pieces: [],
         children: [
-          { name: 'Child', workflows: ['nested'], children: [] },
+          { name: 'Child', pieces: ['nested'], children: [] },
         ],
       },
     ];
 
-    const paths = findWorkflowCategories('nested', categories);
+    const paths = findPieceCategories('nested', categories);
     expect(paths).toEqual(['Parent / Child']);
   });
 });
@@ -1,8 +1,8 @@
 /**
- * Tests for expert/expert-cqrs workflow parallel review structure.
+ * Tests for expert/expert-cqrs piece parallel review structure.
  *
  * Validates that:
- * - expert and expert-cqrs workflows load successfully via loadWorkflow
+ * - expert and expert-cqrs pieces load successfully via loadPiece
  * - The reviewers movement is a parallel movement with expected sub-movements
  * - ai_review routes to reviewers (not individual review movements)
  * - fix movement routes back to reviewers
@@ -11,25 +11,25 @@
  */
 
 import { describe, it, expect } from 'vitest';
-import { loadWorkflow } from '../infra/config/index.js';
+import { loadPiece } from '../infra/config/index.js';
 
-describe('expert workflow parallel structure', () => {
-  const workflow = loadWorkflow('expert', process.cwd());
+describe('expert piece parallel structure', () => {
+  const piece = loadPiece('expert', process.cwd());
 
   it('should load successfully', () => {
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('expert');
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('expert');
   });
 
   it('should have a reviewers parallel movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     expect(reviewers).toBeDefined();
     expect(reviewers!.parallel).toBeDefined();
     expect(reviewers!.parallel!.length).toBe(4);
   });
 
   it('should have arch-review, frontend-review, security-review, qa-review as sub-movements', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     const subNames = reviewers!.parallel!.map((s) => s.name);
     expect(subNames).toContain('arch-review');
     expect(subNames).toContain('frontend-review');
@@ -38,7 +38,7 @@ describe('expert workflow parallel structure', () => {
   });
 
   it('should have aggregate rules on reviewers movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     expect(reviewers!.rules).toBeDefined();
     const conditions = reviewers!.rules!.map((r) => r.condition);
     expect(conditions).toContain('all("approved")');
@@ -46,7 +46,7 @@ describe('expert workflow parallel structure', () => {
   });
 
   it('should have simple approved/needs_fix rules on each sub-movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     for (const sub of reviewers!.parallel!) {
       expect(sub.rules).toBeDefined();
       const conditions = sub.rules!.map((r) => r.condition);
@@ -56,21 +56,21 @@ describe('expert workflow parallel structure', () => {
   });
 
   it('should route ai_review to reviewers', () => {
-    const aiReview = workflow!.movements.find((s) => s.name === 'ai_review');
+    const aiReview = piece!.movements.find((s) => s.name === 'ai_review');
     expect(aiReview).toBeDefined();
     const approvedRule = aiReview!.rules!.find((r) => r.next === 'reviewers');
     expect(approvedRule).toBeDefined();
   });
 
   it('should have a unified fix movement routing back to reviewers', () => {
-    const fix = workflow!.movements.find((s) => s.name === 'fix');
+    const fix = piece!.movements.find((s) => s.name === 'fix');
     expect(fix).toBeDefined();
     const fixComplete = fix!.rules!.find((r) => r.next === 'reviewers');
     expect(fixComplete).toBeDefined();
   });
 
   it('should not have individual review/fix movements', () => {
-    const movementNames = workflow!.movements.map((s) => s.name);
+    const movementNames = piece!.movements.map((s) => s.name);
     expect(movementNames).not.toContain('architect_review');
     expect(movementNames).not.toContain('fix_architect');
     expect(movementNames).not.toContain('frontend_review');
@@ -82,35 +82,35 @@ describe('expert workflow parallel structure', () => {
   });
 
   it('should route reviewers all("approved") to supervise', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     const approvedRule = reviewers!.rules!.find((r) => r.condition === 'all("approved")');
     expect(approvedRule!.next).toBe('supervise');
   });
 
   it('should route reviewers any("needs_fix") to fix', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     const needsFixRule = reviewers!.rules!.find((r) => r.condition === 'any("needs_fix")');
     expect(needsFixRule!.next).toBe('fix');
   });
 });
 
-describe('expert-cqrs workflow parallel structure', () => {
-  const workflow = loadWorkflow('expert-cqrs', process.cwd());
+describe('expert-cqrs piece parallel structure', () => {
+  const piece = loadPiece('expert-cqrs', process.cwd());
 
   it('should load successfully', () => {
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('expert-cqrs');
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('expert-cqrs');
   });
 
   it('should have a reviewers parallel movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     expect(reviewers).toBeDefined();
     expect(reviewers!.parallel).toBeDefined();
     expect(reviewers!.parallel!.length).toBe(4);
   });
 
   it('should have cqrs-es-review instead of arch-review', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     const subNames = reviewers!.parallel!.map((s) => s.name);
     expect(subNames).toContain('cqrs-es-review');
     expect(subNames).not.toContain('arch-review');
@@ -120,7 +120,7 @@ describe('expert-cqrs workflow parallel structure', () => {
   });
 
   it('should have aggregate rules on reviewers movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     expect(reviewers!.rules).toBeDefined();
     const conditions = reviewers!.rules!.map((r) => r.condition);
     expect(conditions).toContain('all("approved")');
@@ -128,7 +128,7 @@ describe('expert-cqrs workflow parallel structure', () => {
   });
 
   it('should have simple approved/needs_fix rules on each sub-movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     for (const sub of reviewers!.parallel!) {
       expect(sub.rules).toBeDefined();
       const conditions = sub.rules!.map((r) => r.condition);
@@ -138,21 +138,21 @@ describe('expert-cqrs workflow parallel structure', () => {
   });
 
   it('should route ai_review to reviewers', () => {
-    const aiReview = workflow!.movements.find((s) => s.name === 'ai_review');
+    const aiReview = piece!.movements.find((s) => s.name === 'ai_review');
     expect(aiReview).toBeDefined();
     const approvedRule = aiReview!.rules!.find((r) => r.next === 'reviewers');
     expect(approvedRule).toBeDefined();
   });
 
   it('should have a unified fix movement routing back to reviewers', () => {
-    const fix = workflow!.movements.find((s) => s.name === 'fix');
+    const fix = piece!.movements.find((s) => s.name === 'fix');
     expect(fix).toBeDefined();
     const fixComplete = fix!.rules!.find((r) => r.next === 'reviewers');
     expect(fixComplete).toBeDefined();
   });
 
   it('should not have individual review/fix movements', () => {
-    const movementNames = workflow!.movements.map((s) => s.name);
+    const movementNames = piece!.movements.map((s) => s.name);
     expect(movementNames).not.toContain('cqrs_es_review');
     expect(movementNames).not.toContain('fix_cqrs_es');
     expect(movementNames).not.toContain('frontend_review');
@@ -164,7 +164,7 @@ describe('expert-cqrs workflow parallel structure', () => {
   });
 
   it('should use cqrs-es-reviewer agent for the first sub-movement', () => {
-    const reviewers = workflow!.movements.find((s) => s.name === 'reviewers');
+    const reviewers = piece!.movements.find((s) => s.name === 'reviewers');
     const cqrsReview = reviewers!.parallel!.find((s) => s.name === 'cqrs-es-review');
     expect(cqrsReview!.agent).toContain('cqrs-es-reviewer');
   });
@@ -1,9 +1,9 @@
 /**
- * Tests for workflow selection helpers
+ * Tests for piece selection helpers
  */
 
 import { describe, it, expect, vi, beforeEach } from 'vitest';
-import type { WorkflowDirEntry } from '../infra/config/loaders/workflowLoader.js';
+import type { PieceDirEntry } from '../infra/config/loaders/pieceLoader.js';
 
 const selectOptionMock = vi.fn();
 
@@ -12,19 +12,19 @@ vi.mock('../shared/prompt/index.js', () => ({
 }));
 
 vi.mock('../infra/config/global/index.js', () => ({
-  getBookmarkedWorkflows: () => [],
+  getBookmarkedPieces: () => [],
   toggleBookmark: vi.fn(),
 }));
 
-const { selectWorkflowFromEntries } = await import('../features/workflowSelection/index.js');
+const { selectPieceFromEntries } = await import('../features/pieceSelection/index.js');
 
-describe('selectWorkflowFromEntries', () => {
+describe('selectPieceFromEntries', () => {
   beforeEach(() => {
     selectOptionMock.mockReset();
   });
 
-  it('should select from custom workflows when source is chosen', async () => {
-    const entries: WorkflowDirEntry[] = [
+  it('should select from custom pieces when source is chosen', async () => {
+    const entries: PieceDirEntry[] = [
       { name: 'custom-flow', path: '/tmp/custom.yaml', source: 'user' },
       { name: 'builtin-flow', path: '/tmp/builtin.yaml', source: 'builtin' },
     ];
@@ -33,19 +33,19 @@ describe('selectWorkflowFromEntries', () => {
       .mockResolvedValueOnce('custom')
       .mockResolvedValueOnce('custom-flow');
 
-    const selected = await selectWorkflowFromEntries(entries, '');
+    const selected = await selectPieceFromEntries(entries, '');
     expect(selected).toBe('custom-flow');
     expect(selectOptionMock).toHaveBeenCalledTimes(2);
   });
 
-  it('should skip source selection when only builtin workflows exist', async () => {
-    const entries: WorkflowDirEntry[] = [
+  it('should skip source selection when only builtin pieces exist', async () => {
+    const entries: PieceDirEntry[] = [
       { name: 'builtin-flow', path: '/tmp/builtin.yaml', source: 'builtin' },
     ];
 
     selectOptionMock.mockResolvedValueOnce('builtin-flow');
 
-    const selected = await selectWorkflowFromEntries(entries, '');
+    const selected = await selectPieceFromEntries(entries, '');
     expect(selected).toBe('builtin-flow');
     expect(selectOptionMock).toHaveBeenCalledTimes(1);
   });
 });
189
src/__tests__/pieceLoader.test.ts
Normal file
189
src/__tests__/pieceLoader.test.ts
Normal file
@ -0,0 +1,189 @@
+/**
+ * Tests for isPiecePath and loadPieceByIdentifier
+ */
+
+import { describe, it, expect, beforeEach, afterEach } from 'vitest';
+import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
+import { join } from 'node:path';
+import { tmpdir } from 'node:os';
+import {
+  isPiecePath,
+  loadPieceByIdentifier,
+  listPieces,
+  loadAllPieces,
+} from '../infra/config/loaders/pieceLoader.js';
+
+const SAMPLE_PIECE = `name: test-piece
+description: Test piece
+initial_movement: step1
+max_iterations: 1
+
+movements:
+  - name: step1
+    agent: coder
+    instruction: "{task}"
+`;
+
+describe('isPiecePath', () => {
+  it('should return true for absolute paths', () => {
+    expect(isPiecePath('/path/to/piece.yaml')).toBe(true);
+    expect(isPiecePath('/piece')).toBe(true);
+  });
+
+  it('should return true for home directory paths', () => {
+    expect(isPiecePath('~/piece.yaml')).toBe(true);
+    expect(isPiecePath('~/.takt/pieces/custom.yaml')).toBe(true);
+  });
+
+  it('should return true for relative paths starting with ./', () => {
+    expect(isPiecePath('./piece.yaml')).toBe(true);
+    expect(isPiecePath('./subdir/piece.yaml')).toBe(true);
+  });
+
+  it('should return true for relative paths starting with ../', () => {
+    expect(isPiecePath('../piece.yaml')).toBe(true);
+    expect(isPiecePath('../subdir/piece.yaml')).toBe(true);
+  });
+
+  it('should return true for paths ending with .yaml', () => {
+    expect(isPiecePath('custom.yaml')).toBe(true);
+    expect(isPiecePath('my-piece.yaml')).toBe(true);
+  });
+
+  it('should return true for paths ending with .yml', () => {
+    expect(isPiecePath('custom.yml')).toBe(true);
+    expect(isPiecePath('my-piece.yml')).toBe(true);
+  });
+
+  it('should return false for plain piece names', () => {
+    expect(isPiecePath('default')).toBe(false);
+    expect(isPiecePath('simple')).toBe(false);
+    expect(isPiecePath('magi')).toBe(false);
+    expect(isPiecePath('my-custom-piece')).toBe(false);
+  });
+});
+
+describe('loadPieceByIdentifier', () => {
+  let tempDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should load piece by name (builtin)', () => {
+    const piece = loadPieceByIdentifier('default', process.cwd());
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('default');
+  });
+
+  it('should load piece by absolute path', () => {
+    const filePath = join(tempDir, 'test.yaml');
+    writeFileSync(filePath, SAMPLE_PIECE);
+
+    const piece = loadPieceByIdentifier(filePath, tempDir);
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('test-piece');
+  });
+
+  it('should load piece by relative path', () => {
+    const filePath = join(tempDir, 'test.yaml');
+    writeFileSync(filePath, SAMPLE_PIECE);
+
+    const piece = loadPieceByIdentifier('./test.yaml', tempDir);
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('test-piece');
+  });
+
+  it('should load piece by filename with .yaml extension', () => {
+    const filePath = join(tempDir, 'test.yaml');
+    writeFileSync(filePath, SAMPLE_PIECE);
+
+    const piece = loadPieceByIdentifier('test.yaml', tempDir);
+    expect(piece).not.toBeNull();
+    expect(piece!.name).toBe('test-piece');
+  });
+
+  it('should return null for non-existent name', () => {
+    const piece = loadPieceByIdentifier('non-existent-piece-xyz', process.cwd());
+    expect(piece).toBeNull();
+  });
+
+  it('should return null for non-existent path', () => {
+    const piece = loadPieceByIdentifier('./non-existent.yaml', tempDir);
+    expect(piece).toBeNull();
+  });
+});
+
+describe('listPieces with project-local', () => {
+  let tempDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should include project-local pieces when cwd is provided', () => {
+    const projectPiecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(projectPiecesDir, { recursive: true });
+    writeFileSync(join(projectPiecesDir, 'project-custom.yaml'), SAMPLE_PIECE);
+
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('project-custom');
+  });
+
+  it('should include builtin pieces regardless of cwd', () => {
+    const pieces = listPieces(tempDir);
+    expect(pieces).toContain('default');
+  });
+});
+
+describe('loadAllPieces with project-local', () => {
+  let tempDir: string;
+
+  beforeEach(() => {
+    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
+  });
+
+  afterEach(() => {
+    rmSync(tempDir, { recursive: true, force: true });
+  });
+
+  it('should include project-local pieces when cwd is provided', () => {
+    const projectPiecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(projectPiecesDir, { recursive: true });
+    writeFileSync(join(projectPiecesDir, 'project-custom.yaml'), SAMPLE_PIECE);
+
+    const pieces = loadAllPieces(tempDir);
+    expect(pieces.has('project-custom')).toBe(true);
+    expect(pieces.get('project-custom')!.name).toBe('test-piece');
+  });
+
+  it('should have project-local override builtin when same name', () => {
+    const projectPiecesDir = join(tempDir, '.takt', 'pieces');
+    mkdirSync(projectPiecesDir, { recursive: true });
+
+    const overridePiece = `name: project-override
+description: Project override
+initial_movement: step1
+max_iterations: 1
+
+movements:
+  - name: step1
+    agent: coder
+    instruction: "{task}"
+`;
+    writeFileSync(join(projectPiecesDir, 'default.yaml'), overridePiece);
+
+    const pieces = loadAllPieces(tempDir);
+    expect(pieces.get('default')!.name).toBe('project-override');
+  });
+});
@@ -76,7 +76,7 @@ describe('executePipeline', () => {
     mockLoadGlobalConfig.mockReturnValue({
       language: 'en',
       trustedDirectories: [],
-      defaultWorkflow: 'default',
+      defaultPiece: 'default',
       logLevel: 'info',
       provider: 'claude',
     });
@@ -84,7 +84,7 @@ describe('executePipeline', () => {

   it('should return exit code 2 when neither --issue nor --task is specified', async () => {
     const exitCode = await executePipeline({
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -97,7 +97,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       issueNumber: 99,
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -112,7 +112,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       issueNumber: 999,
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -120,7 +120,7 @@ describe('executePipeline', () => {
     expect(exitCode).toBe(2);
   });

-  it('should return exit code 3 when workflow fails', async () => {
+  it('should return exit code 3 when piece fails', async () => {
     mockFetchIssue.mockReturnValueOnce({
       number: 99,
       title: 'Test issue',
@@ -132,7 +132,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       issueNumber: 99,
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -145,7 +145,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -154,7 +154,7 @@ describe('executePipeline', () => {
     expect(mockExecuteTask).toHaveBeenCalledWith({
       task: 'Fix the bug',
       cwd: '/tmp/test',
-      workflowIdentifier: 'default',
+      pieceIdentifier: 'default',
       projectCwd: '/tmp/test',
       agentOverrides: undefined,
     });
@@ -165,7 +165,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
       provider: 'codex',
@@ -176,7 +176,7 @@ describe('executePipeline', () => {
     expect(mockExecuteTask).toHaveBeenCalledWith({
       task: 'Fix the bug',
       cwd: '/tmp/test',
-      workflowIdentifier: 'default',
+      pieceIdentifier: 'default',
       projectCwd: '/tmp/test',
       agentOverrides: { provider: 'codex', model: 'codex-model' },
     });
@@ -188,7 +188,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: true,
       cwd: '/tmp/test',
     });
@@ -202,7 +202,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       branch: 'fix/my-branch',
       autoPr: true,
       repo: 'owner/repo',
@@ -224,7 +224,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'From --task flag',
-      workflow: 'magi',
+      piece: 'magi',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -233,7 +233,7 @@ describe('executePipeline', () => {
     expect(mockExecuteTask).toHaveBeenCalledWith({
       task: 'From --task flag',
       cwd: '/tmp/test',
-      workflowIdentifier: 'magi',
+      pieceIdentifier: 'magi',
       projectCwd: '/tmp/test',
       agentOverrides: undefined,
     });
@@ -244,7 +244,7 @@ describe('executePipeline', () => {
     mockLoadGlobalConfig.mockReturnValue({
       language: 'en',
       trustedDirectories: [],
-      defaultWorkflow: 'default',
+      defaultPiece: 'default',
       logLevel: 'info',
       provider: 'claude',
       pipeline: {
@@ -263,7 +263,7 @@ describe('executePipeline', () => {

     await executePipeline({
       issueNumber: 42,
-      workflow: 'default',
+      piece: 'default',
       branch: 'test-branch',
       autoPr: false,
       cwd: '/tmp/test',
@@ -281,7 +281,7 @@ describe('executePipeline', () => {
     mockLoadGlobalConfig.mockReturnValue({
       language: 'en',
       trustedDirectories: [],
-      defaultWorkflow: 'default',
+      defaultPiece: 'default',
       logLevel: 'info',
       provider: 'claude',
       pipeline: {
@@ -300,7 +300,7 @@ describe('executePipeline', () => {

     await executePipeline({
       issueNumber: 10,
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       cwd: '/tmp/test',
     });
@@ -318,7 +318,7 @@ describe('executePipeline', () => {
     mockLoadGlobalConfig.mockReturnValue({
       language: 'en',
       trustedDirectories: [],
-      defaultWorkflow: 'default',
+      defaultPiece: 'default',
       logLevel: 'info',
       provider: 'claude',
       pipeline: {
@@ -338,7 +338,7 @@ describe('executePipeline', () => {

     await executePipeline({
       issueNumber: 50,
-      workflow: 'default',
+      piece: 'default',
       branch: 'fix-auth',
       autoPr: true,
       cwd: '/tmp/test',
@@ -360,7 +360,7 @@ describe('executePipeline', () => {

     await executePipeline({
       task: 'Fix bug',
-      workflow: 'default',
+      piece: 'default',
       branch: 'fix-branch',
       autoPr: true,
       cwd: '/tmp/test',
@@ -383,7 +383,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       skipGit: true,
       cwd: '/tmp/test',
@@ -393,7 +393,7 @@ describe('executePipeline', () => {
     expect(mockExecuteTask).toHaveBeenCalledWith({
       task: 'Fix the bug',
       cwd: '/tmp/test',
-      workflowIdentifier: 'default',
+      pieceIdentifier: 'default',
       projectCwd: '/tmp/test',
       agentOverrides: undefined,
     });
@@ -411,7 +411,7 @@ describe('executePipeline', () => {

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: true,
       skipGit: true,
       cwd: '/tmp/test',
@@ -421,12 +421,12 @@ describe('executePipeline', () => {
     expect(mockCreatePullRequest).not.toHaveBeenCalled();
   });

-  it('should still return workflow failure exit code when skipGit is true', async () => {
+  it('should still return piece failure exit code when skipGit is true', async () => {
     mockExecuteTask.mockResolvedValueOnce(false);

     const exitCode = await executePipeline({
       task: 'Fix the bug',
-      workflow: 'default',
+      piece: 'default',
       autoPr: false,
       skipGit: true,
       cwd: '/tmp/test',
@@ -39,7 +39,7 @@ describe('variable substitution', () => {
   it('replaces {{variableName}} placeholders with provided values', () => {
     const result = loadTemplate('perform_builtin_agent_system_prompt', 'en', { agentName: 'test-agent' });
     expect(result).toContain('You are the test-agent agent');
-    expect(result).toContain('Follow the standard test-agent workflow');
+    expect(result).toContain('Follow the standard test-agent piece');
   });

   it('replaces undefined variables with empty string', () => {
@@ -57,13 +57,13 @@ describe('variable substitution', () => {
     expect(result).toContain('| 1 | Success |');
   });

-  it('replaces workflow info variables in interactive prompt', () => {
+  it('replaces piece info variables in interactive prompt', () => {
     const result = loadTemplate('score_interactive_system_prompt', 'en', {
-      workflowInfo: true,
-      workflowName: 'my-workflow',
-      workflowDescription: 'Test description',
+      pieceInfo: true,
+      pieceName: 'my-piece',
+      pieceDescription: 'Test description',
     });
-    expect(result).toContain('"my-workflow"');
+    expect(result).toContain('"my-piece"');
     expect(result).toContain('Test description');
   });
 });
@@ -189,11 +189,11 @@ describe('template content integrity', () => {
     expect(en).toContain('## Execution Rules');
     expect(en).toContain('Do NOT run git commit');
     expect(en).toContain('Do NOT use `cd`');
-    expect(en).toContain('## Workflow Context');
+    expect(en).toContain('## Piece Context');
     expect(en).toContain('## Instructions');
   });

-  it('perform_phase1_message contains workflow context variables', () => {
+  it('perform_phase1_message contains piece context variables', () => {
     const en = loadTemplate('perform_phase1_message', 'en');
     expect(en).toContain('{{iteration}}');
     expect(en).toContain('{{movement}}');
@@ -1,9 +1,9 @@
 /**
- * Tests for review-only workflow
+ * Tests for review-only piece
  *
  * Covers:
- * - Workflow YAML files (EN/JA) load and pass schema validation
- * - Workflow structure: plan -> reviewers (parallel) -> supervise -> pr-comment
+ * - Piece YAML files (EN/JA) load and pass schema validation
+ * - Piece structure: plan -> reviewers (parallel) -> supervise -> pr-comment
  * - All movements have edit: false
  * - pr-commenter agent has Bash in allowed_tools
  * - Routing rules for local vs PR comment flows
@@ -13,7 +13,7 @@ import { describe, it, expect } from 'vitest';
 import { readFileSync } from 'node:fs';
 import { join } from 'node:path';
 import { parse as parseYaml } from 'yaml';
-import { WorkflowConfigRawSchema } from '../core/models/index.js';
+import { PieceConfigRawSchema } from '../core/models/index.js';

 const RESOURCES_DIR = join(import.meta.dirname, '../../resources/global');

@@ -23,11 +23,11 @@ function loadReviewOnlyYaml(lang: 'en' | 'ja') {
   return parseYaml(content);
 }

-describe('review-only workflow (EN)', () => {
+describe('review-only piece (EN)', () => {
   const raw = loadReviewOnlyYaml('en');

   it('should pass schema validation', () => {
-    const result = WorkflowConfigRawSchema.safeParse(raw);
+    const result = PieceConfigRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });

@@ -137,11 +137,11 @@ describe('review-only workflow (EN)', () => {
   });
 });

-describe('review-only workflow (JA)', () => {
+describe('review-only piece (JA)', () => {
   const raw = loadReviewOnlyYaml('ja');

   it('should pass schema validation', () => {
-    const result = WorkflowConfigRawSchema.safeParse(raw);
+    const result = PieceConfigRawSchema.safeParse(raw);
     expect(result.success).toBe(true);
   });

@@ -203,21 +203,21 @@ describe('pr-commenter agent files', () => {
     expect(content).toContain('gh pr comment');
   });

-  it('should NOT contain workflow-specific report names (EN)', () => {
+  it('should NOT contain piece-specific report names (EN)', () => {
     const filePath = join(RESOURCES_DIR, 'en', 'agents', 'review', 'pr-commenter.md');
     const content = readFileSync(filePath, 'utf-8');
-    // Agent should not reference specific review-only workflow report files
+    // Agent should not reference specific review-only piece report files
     expect(content).not.toContain('01-architect-review.md');
     expect(content).not.toContain('02-security-review.md');
     expect(content).not.toContain('03-ai-review.md');
     expect(content).not.toContain('04-review-summary.md');
-    // Agent should not reference specific reviewer names from review-only workflow
+    // Agent should not reference specific reviewer names from review-only piece
     expect(content).not.toContain('Architecture review report');
     expect(content).not.toContain('Security review report');
     expect(content).not.toContain('AI antipattern review report');
   });

-  it('should NOT contain workflow-specific report names (JA)', () => {
+  it('should NOT contain piece-specific report names (JA)', () => {
     const filePath = join(RESOURCES_DIR, 'ja', 'agents', 'review', 'pr-commenter.md');
     const content = readFileSync(filePath, 'utf-8');
     expect(content).not.toContain('01-architect-review.md');
@@ -227,7 +227,7 @@ describe('pr-commenter agent files', () => {
   });
 });

-describe('pr-comment instruction_template contains workflow-specific procedures', () => {
+describe('pr-comment instruction_template contains piece-specific procedures', () => {
   it('EN: should reference specific report files', () => {
     const raw = loadReviewOnlyYaml('en');
     const prComment = raw.movements.find((s: { name: string }) => s.name === 'pr-comment');
@@ -17,8 +17,8 @@ import {
   type SessionLog,
   type NdjsonRecord,
   type NdjsonStepComplete,
-  type NdjsonWorkflowComplete,
-  type NdjsonWorkflowAbort,
+  type NdjsonPieceComplete,
+  type NdjsonPieceAbort,
   type NdjsonPhaseStart,
   type NdjsonPhaseComplete,
   type NdjsonInteractiveStart,
@@ -56,7 +56,7 @@ describe('updateLatestPointer', () => {
     expect(pointer.sessionId).toBe('abc-123');
     expect(pointer.logFile).toBe('abc-123.jsonl');
     expect(pointer.task).toBe('my task');
-    expect(pointer.workflowName).toBe('default');
+    expect(pointer.pieceName).toBe('default');
     expect(pointer.status).toBe('running');
     expect(pointer.iterations).toBe(0);
     expect(pointer.startTime).toBeDefined();
@@ -84,7 +84,7 @@ describe('updateLatestPointer', () => {
     const log1 = createSessionLog('first task', projectDir, 'wf1');
     updateLatestPointer(log1, 'sid-first', projectDir);

-    // Simulate a second workflow starting
+    // Simulate a second piece starting
     const log2 = createSessionLog('second task', projectDir, 'wf2');
     updateLatestPointer(log2, 'sid-second', projectDir, { copyToPrevious: true });

@@ -102,11 +102,11 @@ describe('updateLatestPointer', () => {
   });

   it('should not update previous.json on step-complete calls (no copyToPrevious)', () => {
-    // Workflow 1 creates latest
+    // Piece 1 creates latest
     const log1 = createSessionLog('first', projectDir, 'wf');
     updateLatestPointer(log1, 'sid-1', projectDir);

-    // Workflow 2 starts → copies latest to previous
+    // Piece 2 starts → copies latest to previous
     const log2 = createSessionLog('second', projectDir, 'wf');
     updateLatestPointer(log2, 'sid-2', projectDir, { copyToPrevious: true });

@@ -129,7 +129,7 @@ describe('updateLatestPointer', () => {
     log.iterations = 2;
     updateLatestPointer(log, 'sid-1', projectDir);

-    // Simulate workflow completion
+    // Simulate piece completion
     log.status = 'completed';
     log.iterations = 3;
     updateLatestPointer(log, 'sid-1', projectDir);
@@ -153,7 +153,7 @@ describe('NDJSON log', () => {
   });

   describe('initNdjsonLog', () => {
-    it('should create a .jsonl file with workflow_start record', () => {
+    it('should create a .jsonl file with piece_start record', () => {
       const filepath = initNdjsonLog('sess-001', 'my task', 'default', projectDir);

       expect(filepath).toContain('sess-001.jsonl');
@@ -164,10 +164,10 @@ describe('NDJSON log', () => {
       expect(lines).toHaveLength(1);

       const record = JSON.parse(lines[0]!) as NdjsonRecord;
-      expect(record.type).toBe('workflow_start');
-      if (record.type === 'workflow_start') {
+      expect(record.type).toBe('piece_start');
+      if (record.type === 'piece_start') {
         expect(record.task).toBe('my task');
-        expect(record.workflowName).toBe('default');
+        expect(record.pieceName).toBe('default');
         expect(record.startTime).toBeDefined();
       }
     });
@@ -199,10 +199,10 @@ describe('NDJSON log', () => {

       const content = readFileSync(filepath, 'utf-8');
       const lines = content.trim().split('\n');
-      expect(lines).toHaveLength(3); // workflow_start + step_start + step_complete
+      expect(lines).toHaveLength(3); // piece_start + step_start + step_complete

       const parsed0 = JSON.parse(lines[0]!) as NdjsonRecord;
-      expect(parsed0.type).toBe('workflow_start');
+      expect(parsed0.type).toBe('piece_start');

       const parsed1 = JSON.parse(lines[1]!) as NdjsonRecord;
       expect(parsed1.type).toBe('step_start');
@@ -247,8 +247,8 @@ describe('NDJSON log', () => {
       };
       appendNdjsonLine(filepath, stepComplete);

-      const complete: NdjsonWorkflowComplete = {
-        type: 'workflow_complete',
+      const complete: NdjsonPieceComplete = {
+        type: 'piece_complete',
         iterations: 1,
         endTime: '2025-01-01T00:00:03.000Z',
       };
@@ -257,7 +257,7 @@ describe('NDJSON log', () => {
       const log = loadNdjsonLog(filepath);
       expect(log).not.toBeNull();
       expect(log!.task).toBe('build app');
-      expect(log!.workflowName).toBe('default');
+      expect(log!.pieceName).toBe('default');
       expect(log!.status).toBe('completed');
       expect(log!.iterations).toBe(1);
       expect(log!.endTime).toBe('2025-01-01T00:00:03.000Z');
@@ -268,7 +268,7 @@ describe('NDJSON log', () => {
       expect(log!.history[0]!.matchedRuleMethod).toBe('phase3_tag');
     });

-    it('should handle aborted workflow', () => {
+    it('should handle aborted piece', () => {
       const filepath = initNdjsonLog('sess-004', 'failing task', 'wf', projectDir);

       appendNdjsonLine(filepath, {
@@ -290,8 +290,8 @@ describe('NDJSON log', () => {
         timestamp: '2025-01-01T00:00:02.000Z',
       } satisfies NdjsonStepComplete);
|
} satisfies NdjsonStepComplete);
|
||||||
|
|
||||||
const abort: NdjsonWorkflowAbort = {
|
const abort: NdjsonPieceAbort = {
|
||||||
type: 'workflow_abort',
|
type: 'piece_abort',
|
||||||
iterations: 1,
|
iterations: 1,
|
||||||
reason: 'Max iterations reached',
|
reason: 'Max iterations reached',
|
||||||
endTime: '2025-01-01T00:00:03.000Z',
|
endTime: '2025-01-01T00:00:03.000Z',
|
||||||
@ -342,7 +342,7 @@ describe('NDJSON log', () => {
|
|||||||
} satisfies NdjsonStepComplete);
|
} satisfies NdjsonStepComplete);
|
||||||
|
|
||||||
appendNdjsonLine(filepath, {
|
appendNdjsonLine(filepath, {
|
||||||
type: 'workflow_complete',
|
type: 'piece_complete',
|
||||||
iterations: 1,
|
iterations: 1,
|
||||||
endTime: '2025-01-01T00:00:03.000Z',
|
endTime: '2025-01-01T00:00:03.000Z',
|
||||||
});
|
});
|
||||||
@ -370,7 +370,7 @@ describe('NDJSON log', () => {
|
|||||||
} satisfies NdjsonStepComplete);
|
} satisfies NdjsonStepComplete);
|
||||||
|
|
||||||
appendNdjsonLine(filepath, {
|
appendNdjsonLine(filepath, {
|
||||||
type: 'workflow_complete',
|
type: 'piece_complete',
|
||||||
iterations: 1,
|
iterations: 1,
|
||||||
endTime: '2025-01-01T00:00:03.000Z',
|
endTime: '2025-01-01T00:00:03.000Z',
|
||||||
});
|
});
|
||||||
@ -389,7 +389,7 @@ describe('NDJSON log', () => {
|
|||||||
const legacyLog: SessionLog = {
|
const legacyLog: SessionLog = {
|
||||||
task: 'legacy task',
|
task: 'legacy task',
|
||||||
projectDir,
|
projectDir,
|
||||||
workflowName: 'wf',
|
pieceName: 'wf',
|
||||||
iterations: 0,
|
iterations: 0,
|
||||||
startTime: new Date().toISOString(),
|
startTime: new Date().toISOString(),
|
||||||
status: 'running',
|
status: 'running',
|
||||||
@ -422,8 +422,8 @@ describe('NDJSON log', () => {
|
|||||||
|
|
||||||
const after2 = readFileSync(filepath, 'utf-8').trim().split('\n');
|
const after2 = readFileSync(filepath, 'utf-8').trim().split('\n');
|
||||||
expect(after2).toHaveLength(2);
|
expect(after2).toHaveLength(2);
|
||||||
// First line should still be workflow_start
|
// First line should still be piece_start
|
||||||
expect(JSON.parse(after2[0]!).type).toBe('workflow_start');
|
expect(JSON.parse(after2[0]!).type).toBe('piece_start');
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should produce valid JSON on each line', () => {
|
it('should produce valid JSON on each line', () => {
|
||||||
@ -466,7 +466,7 @@ describe('NDJSON log', () => {
|
|||||||
|
|
||||||
const content = readFileSync(filepath, 'utf-8');
|
const content = readFileSync(filepath, 'utf-8');
|
||||||
const lines = content.trim().split('\n');
|
const lines = content.trim().split('\n');
|
||||||
expect(lines).toHaveLength(2); // workflow_start + phase_start
|
expect(lines).toHaveLength(2); // piece_start + phase_start
|
||||||
|
|
||||||
const parsed = JSON.parse(lines[1]!) as NdjsonRecord;
|
const parsed = JSON.parse(lines[1]!) as NdjsonRecord;
|
||||||
expect(parsed.type).toBe('phase_start');
|
expect(parsed.type).toBe('phase_start');
|
||||||
|
|||||||
@@ -10,7 +10,7 @@ vi.mock('../infra/providers/index.js', () => ({

 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn(),
-  getBuiltinWorkflowsEnabled: vi.fn().mockReturnValue(true),
+  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));

 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
@@ -41,7 +41,7 @@ beforeEach(() => {
   mockLoadGlobalConfig.mockReturnValue({
     language: 'ja',
     trustedDirectories: [],
-    defaultWorkflow: 'default',
+    defaultPiece: 'default',
     logLevel: 'info',
     provider: 'claude',
     model: 'haiku',
@@ -168,7 +168,7 @@ describe('summarizeTaskName', () => {
     mockLoadGlobalConfig.mockReturnValue({
       language: 'ja',
       trustedDirectories: [],
-      defaultWorkflow: 'default',
+      defaultPiece: 'default',
       logLevel: 'info',
       provider: 'codex',
       model: 'gpt-4',
@@ -6,8 +6,8 @@ import { describe, it, expect, vi, beforeEach } from 'vitest';

 // Mock dependencies before importing the module under test
 vi.mock('../infra/config/index.js', () => ({
-  loadWorkflowByIdentifier: vi.fn(),
-  isWorkflowPath: vi.fn(() => false),
+  loadPieceByIdentifier: vi.fn(),
+  isPiecePath: vi.fn(() => false),
   loadGlobalConfig: vi.fn(() => ({})),
 }));

@@ -51,8 +51,8 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
   getErrorMessage: vi.fn((e) => e.message),
 }));

-vi.mock('../features/tasks/execute/workflowExecution.js', () => ({
-  executeWorkflow: vi.fn(),
+vi.mock('../features/tasks/execute/pieceExecution.js', () => ({
+  executePiece: vi.fn(),
 }));

 vi.mock('../shared/context.js', () => ({
@@ -60,7 +60,7 @@ vi.mock('../shared/context.js', () => ({
 }));

 vi.mock('../shared/constants.js', () => ({
-  DEFAULT_WORKFLOW_NAME: 'default',
+  DEFAULT_PIECE_NAME: 'default',
   DEFAULT_LANGUAGE: 'en',
 }));

@@ -93,7 +93,7 @@ describe('resolveTaskExecution', () => {
     // Then
     expect(result).toEqual({
       execCwd: '/project',
-      execWorkflow: 'default',
+      execPiece: 'default',
       isWorktree: false,
     });
     expect(mockSummarizeTaskName).not.toHaveBeenCalled();
@@ -149,7 +149,7 @@ describe('resolveTaskExecution', () => {
     });
     expect(result).toEqual({
       execCwd: '/project/../20260128T0504-add-auth',
-      execWorkflow: 'default',
+      execPiece: 'default',
       isWorktree: true,
       branch: 'takt/20260128T0504-add-auth',
     });
@@ -205,15 +205,15 @@ describe('resolveTaskExecution', () => {
     expect(mockSummarizeTaskName).toHaveBeenCalledWith('New feature implementation details', { cwd: '/project' });
   });

-  it('should use workflow override from task data', async () => {
-    // Given: Task with workflow override
+  it('should use piece override from task data', async () => {
+    // Given: Task with piece override
     const task: TaskInfo = {
-      name: 'task-with-workflow',
+      name: 'task-with-piece',
       content: 'Task content',
       filePath: '/tasks/task.yaml',
       data: {
         task: 'Task content',
-        workflow: 'custom-workflow',
+        piece: 'custom-piece',
       },
     };

@@ -221,7 +221,7 @@ describe('resolveTaskExecution', () => {
     const result = await resolveTaskExecution(task, '/project', 'default');

     // Then
-    expect(result.execWorkflow).toBe('custom-workflow');
+    expect(result.execPiece).toBe('custom-piece');
   });

   it('should pass branch option to createSharedClone when specified', async () => {
@@ -1,12 +1,12 @@
 /**
- * Tests for workflow transitions module (movement-based)
+ * Tests for piece transitions module (movement-based)
  */

 import { describe, it, expect } from 'vitest';
-import { determineNextMovementByRules } from '../core/workflow/index.js';
-import type { WorkflowMovement } from '../core/models/index.js';
+import { determineNextMovementByRules } from '../core/piece/index.js';
+import type { PieceMovement } from '../core/models/index.js';

-function createMovementWithRules(rules: { condition: string; next: string }[]): WorkflowMovement {
+function createMovementWithRules(rules: { condition: string; next: string }[]): PieceMovement {
   return {
     name: 'test-step',
     agent: 'test-agent',
@@ -42,7 +42,7 @@ describe('determineNextMovementByRules', () => {
   });

   it('should return null when movement has no rules', () => {
-    const step: WorkflowMovement = {
+    const step: PieceMovement = {
       name: 'test-step',
       agent: 'test-agent',
       agentDisplayName: 'Test Agent',
@@ -63,7 +63,7 @@ describe('determineNextMovementByRules', () => {

   it('should return null when rule exists but next is undefined', () => {
     // Parallel sub-movement rules may omit `next` (optional field)
-    const step: WorkflowMovement = {
+    const step: PieceMovement = {
       name: 'sub-step',
       agent: 'test-agent',
       agentDisplayName: 'Test Agent',
@@ -65,7 +65,7 @@ describe('createSessionLog', () => {

     expect(log.task).toBe('test task');
     expect(log.projectDir).toBe('/project');
-    expect(log.workflowName).toBe('default');
+    expect(log.pieceName).toBe('default');
     expect(log.iterations).toBe(0);
     expect(log.status).toBe('running');
     expect(log.history).toEqual([]);
@@ -1,303 +0,0 @@
-/**
- * Tests for workflow category (subdirectory) support — Issue #85
- */
-
-import { describe, it, expect, beforeEach, afterEach } from 'vitest';
-import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
-import { join } from 'node:path';
-import { tmpdir } from 'node:os';
-import {
-  listWorkflows,
-  listWorkflowEntries,
-  loadAllWorkflows,
-  loadWorkflow,
-} from '../infra/config/loaders/workflowLoader.js';
-import type { WorkflowDirEntry } from '../infra/config/loaders/workflowLoader.js';
-import {
-  buildWorkflowSelectionItems,
-  buildTopLevelSelectOptions,
-  parseCategorySelection,
-  buildCategoryWorkflowOptions,
-  type WorkflowSelectionItem,
-} from '../features/workflowSelection/index.js';
-
-const SAMPLE_WORKFLOW = `name: test-workflow
-description: Test workflow
-initial_movement: step1
-max_iterations: 1
-
-movements:
-  - name: step1
-    agent: coder
-    instruction: "{task}"
-`;
-
-function createWorkflow(dir: string, name: string, content?: string): void {
-  writeFileSync(join(dir, `${name}.yaml`), content ?? SAMPLE_WORKFLOW);
-}
-
-describe('workflow categories - directory scanning', () => {
-  let tempDir: string;
-  let workflowsDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
-    workflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should discover root-level workflows', () => {
-    createWorkflow(workflowsDir, 'simple');
-    createWorkflow(workflowsDir, 'advanced');
-
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('simple');
-    expect(workflows).toContain('advanced');
-  });
-
-  it('should discover workflows in subdirectories with category prefix', () => {
-    const frontendDir = join(workflowsDir, 'frontend');
-    mkdirSync(frontendDir);
-    createWorkflow(frontendDir, 'react');
-    createWorkflow(frontendDir, 'vue');
-
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('frontend/react');
-    expect(workflows).toContain('frontend/vue');
-  });
-
-  it('should discover both root-level and categorized workflows', () => {
-    createWorkflow(workflowsDir, 'simple');
-
-    const frontendDir = join(workflowsDir, 'frontend');
-    mkdirSync(frontendDir);
-    createWorkflow(frontendDir, 'react');
-
-    const backendDir = join(workflowsDir, 'backend');
-    mkdirSync(backendDir);
-    createWorkflow(backendDir, 'api');
-
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('simple');
-    expect(workflows).toContain('frontend/react');
-    expect(workflows).toContain('backend/api');
-  });
-
-  it('should not scan deeper than 1 level', () => {
-    const deepDir = join(workflowsDir, 'category', 'subcategory');
-    mkdirSync(deepDir, { recursive: true });
-    createWorkflow(deepDir, 'deep');
-
-    const workflows = listWorkflows(tempDir);
-    // category/subcategory should be treated as a directory entry, not scanned further
-    expect(workflows).not.toContain('category/subcategory/deep');
-    // Only 1-level: category/deep would not exist since deep.yaml is in subcategory
-    expect(workflows).not.toContain('deep');
-  });
-});
-
-describe('workflow categories - listWorkflowEntries', () => {
-  let tempDir: string;
-  let workflowsDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
-    workflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should return entries with category information', () => {
-    createWorkflow(workflowsDir, 'simple');
-
-    const frontendDir = join(workflowsDir, 'frontend');
-    mkdirSync(frontendDir);
-    createWorkflow(frontendDir, 'react');
-
-    const entries = listWorkflowEntries(tempDir);
-    const simpleEntry = entries.find((e) => e.name === 'simple');
-    const reactEntry = entries.find((e) => e.name === 'frontend/react');
-
-    expect(simpleEntry).toBeDefined();
-    expect(simpleEntry!.category).toBeUndefined();
-    expect(simpleEntry!.source).toBe('project');
-
-    expect(reactEntry).toBeDefined();
-    expect(reactEntry!.category).toBe('frontend');
-    expect(reactEntry!.source).toBe('project');
-  });
-});
-
-describe('workflow categories - loadAllWorkflows', () => {
-  let tempDir: string;
-  let workflowsDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
-    workflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should load categorized workflows with qualified names as keys', () => {
-    const frontendDir = join(workflowsDir, 'frontend');
-    mkdirSync(frontendDir);
-    createWorkflow(frontendDir, 'react');
-
-    const workflows = loadAllWorkflows(tempDir);
-    expect(workflows.has('frontend/react')).toBe(true);
-  });
-});
-
-describe('workflow categories - loadWorkflow', () => {
-  let tempDir: string;
-  let workflowsDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-cat-test-'));
-    workflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(workflowsDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should load workflow by category/name identifier', () => {
-    const frontendDir = join(workflowsDir, 'frontend');
-    mkdirSync(frontendDir);
-    createWorkflow(frontendDir, 'react');
-
-    const workflow = loadWorkflow('frontend/react', tempDir);
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('test-workflow');
-  });
-
-  it('should return null for non-existent category/name', () => {
-    const workflow = loadWorkflow('nonexistent/workflow', tempDir);
-    expect(workflow).toBeNull();
-  });
-
-  it('should support .yml extension in subdirectories', () => {
-    const backendDir = join(workflowsDir, 'backend');
-    mkdirSync(backendDir);
-    writeFileSync(join(backendDir, 'api.yml'), SAMPLE_WORKFLOW);
-
-    const workflow = loadWorkflow('backend/api', tempDir);
-    expect(workflow).not.toBeNull();
-  });
-});
-
-describe('buildWorkflowSelectionItems', () => {
-  it('should separate root workflows and categories', () => {
-    const entries: WorkflowDirEntry[] = [
-      { name: 'simple', path: '/tmp/simple.yaml', source: 'project' },
-      { name: 'frontend/react', path: '/tmp/frontend/react.yaml', category: 'frontend', source: 'project' },
-      { name: 'frontend/vue', path: '/tmp/frontend/vue.yaml', category: 'frontend', source: 'project' },
-      { name: 'backend/api', path: '/tmp/backend/api.yaml', category: 'backend', source: 'project' },
-    ];
-
-    const items = buildWorkflowSelectionItems(entries);
-
-    const workflows = items.filter((i) => i.type === 'workflow');
-    const categories = items.filter((i) => i.type === 'category');
-
-    expect(workflows).toHaveLength(1);
-    expect(workflows[0]!.name).toBe('simple');
-
-    expect(categories).toHaveLength(2);
-    const frontend = categories.find((c) => c.name === 'frontend');
-    expect(frontend).toBeDefined();
-    expect(frontend!.type === 'category' && frontend!.workflows).toEqual(['frontend/react', 'frontend/vue']);
-
-    const backend = categories.find((c) => c.name === 'backend');
-    expect(backend).toBeDefined();
-    expect(backend!.type === 'category' && backend!.workflows).toEqual(['backend/api']);
-  });
-
-  it('should sort items alphabetically', () => {
-    const entries: WorkflowDirEntry[] = [
-      { name: 'zebra', path: '/tmp/zebra.yaml', source: 'project' },
-      { name: 'alpha', path: '/tmp/alpha.yaml', source: 'project' },
-      { name: 'misc/playground', path: '/tmp/misc/playground.yaml', category: 'misc', source: 'project' },
-    ];
-
-    const items = buildWorkflowSelectionItems(entries);
-    const names = items.map((i) => i.name);
-    expect(names).toEqual(['alpha', 'misc', 'zebra']);
-  });
-
-  it('should return empty array for empty input', () => {
-    const items = buildWorkflowSelectionItems([]);
-    expect(items).toEqual([]);
-  });
-});
-
-describe('2-stage category selection helpers', () => {
-  const items: WorkflowSelectionItem[] = [
-    { type: 'workflow', name: 'simple' },
-    { type: 'category', name: 'frontend', workflows: ['frontend/react', 'frontend/vue'] },
-    { type: 'category', name: 'backend', workflows: ['backend/api'] },
-  ];
-
-  describe('buildTopLevelSelectOptions', () => {
-    it('should encode categories with prefix in value', () => {
-      const options = buildTopLevelSelectOptions(items, '');
-      const categoryOption = options.find((o) => o.label.includes('frontend'));
-      expect(categoryOption).toBeDefined();
-      expect(categoryOption!.value).toBe('__category__:frontend');
-    });
-
-    it('should mark current workflow', () => {
-      const options = buildTopLevelSelectOptions(items, 'simple');
-      const simpleOption = options.find((o) => o.value === 'simple');
-      expect(simpleOption!.label).toContain('(current)');
-    });
-
-    it('should mark category containing current workflow', () => {
-      const options = buildTopLevelSelectOptions(items, 'frontend/react');
-      const frontendOption = options.find((o) => o.value === '__category__:frontend');
-      expect(frontendOption!.label).toContain('(current)');
-    });
-  });
-
-  describe('parseCategorySelection', () => {
-    it('should return category name for category selection', () => {
-      expect(parseCategorySelection('__category__:frontend')).toBe('frontend');
-    });
-
-    it('should return null for direct workflow selection', () => {
-      expect(parseCategorySelection('simple')).toBeNull();
-    });
-  });
-
-  describe('buildCategoryWorkflowOptions', () => {
-    it('should return options for workflows in a category', () => {
-      const options = buildCategoryWorkflowOptions(items, 'frontend', '');
-      expect(options).not.toBeNull();
-      expect(options).toHaveLength(2);
-      expect(options![0]!.value).toBe('frontend/react');
-      expect(options![0]!.label).toBe('react');
-    });
-
-    it('should mark current workflow in category', () => {
-      const options = buildCategoryWorkflowOptions(items, 'frontend', 'frontend/vue');
-      const vueOption = options!.find((o) => o.value === 'frontend/vue');
-      expect(vueOption!.label).toContain('(current)');
-    });
-
-    it('should return null for non-existent category', () => {
-      expect(buildCategoryWorkflowOptions(items, 'nonexistent', '')).toBeNull();
-    });
-  });
-});
@@ -1,189 +0,0 @@
-/**
- * Tests for isWorkflowPath and loadWorkflowByIdentifier
- */
-
-import { describe, it, expect, beforeEach, afterEach } from 'vitest';
-import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
-import { join } from 'node:path';
-import { tmpdir } from 'node:os';
-import {
-  isWorkflowPath,
-  loadWorkflowByIdentifier,
-  listWorkflows,
-  loadAllWorkflows,
-} from '../infra/config/loaders/workflowLoader.js';
-
-const SAMPLE_WORKFLOW = `name: test-workflow
-description: Test workflow
-initial_movement: step1
-max_iterations: 1
-
-movements:
-  - name: step1
-    agent: coder
-    instruction: "{task}"
-`;
-
-describe('isWorkflowPath', () => {
-  it('should return true for absolute paths', () => {
-    expect(isWorkflowPath('/path/to/workflow.yaml')).toBe(true);
-    expect(isWorkflowPath('/workflow')).toBe(true);
-  });
-
-  it('should return true for home directory paths', () => {
-    expect(isWorkflowPath('~/workflow.yaml')).toBe(true);
-    expect(isWorkflowPath('~/.takt/pieces/custom.yaml')).toBe(true);
-  });
-
-  it('should return true for relative paths starting with ./', () => {
-    expect(isWorkflowPath('./workflow.yaml')).toBe(true);
-    expect(isWorkflowPath('./subdir/workflow.yaml')).toBe(true);
-  });
-
-  it('should return true for relative paths starting with ../', () => {
-    expect(isWorkflowPath('../workflow.yaml')).toBe(true);
-    expect(isWorkflowPath('../subdir/workflow.yaml')).toBe(true);
-  });
-
-  it('should return true for paths ending with .yaml', () => {
-    expect(isWorkflowPath('custom.yaml')).toBe(true);
-    expect(isWorkflowPath('my-workflow.yaml')).toBe(true);
-  });
-
-  it('should return true for paths ending with .yml', () => {
-    expect(isWorkflowPath('custom.yml')).toBe(true);
-    expect(isWorkflowPath('my-workflow.yml')).toBe(true);
-  });
-
-  it('should return false for plain workflow names', () => {
-    expect(isWorkflowPath('default')).toBe(false);
-    expect(isWorkflowPath('simple')).toBe(false);
-    expect(isWorkflowPath('magi')).toBe(false);
-    expect(isWorkflowPath('my-custom-workflow')).toBe(false);
-  });
-});
-
-describe('loadWorkflowByIdentifier', () => {
-  let tempDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should load workflow by name (builtin)', () => {
-    const workflow = loadWorkflowByIdentifier('default', process.cwd());
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('default');
-  });
-
-  it('should load workflow by absolute path', () => {
-    const filePath = join(tempDir, 'test.yaml');
-    writeFileSync(filePath, SAMPLE_WORKFLOW);
-
-    const workflow = loadWorkflowByIdentifier(filePath, tempDir);
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('test-workflow');
-  });
-
-  it('should load workflow by relative path', () => {
-    const filePath = join(tempDir, 'test.yaml');
-    writeFileSync(filePath, SAMPLE_WORKFLOW);
-
-    const workflow = loadWorkflowByIdentifier('./test.yaml', tempDir);
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('test-workflow');
-  });
-
-  it('should load workflow by filename with .yaml extension', () => {
-    const filePath = join(tempDir, 'test.yaml');
-    writeFileSync(filePath, SAMPLE_WORKFLOW);
-
-    const workflow = loadWorkflowByIdentifier('test.yaml', tempDir);
-    expect(workflow).not.toBeNull();
-    expect(workflow!.name).toBe('test-workflow');
-  });
-
-  it('should return null for non-existent name', () => {
-    const workflow = loadWorkflowByIdentifier('non-existent-workflow-xyz', process.cwd());
-    expect(workflow).toBeNull();
-  });
-
-  it('should return null for non-existent path', () => {
-    const workflow = loadWorkflowByIdentifier('./non-existent.yaml', tempDir);
-    expect(workflow).toBeNull();
-  });
-});
-
-describe('listWorkflows with project-local', () => {
-  let tempDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should include project-local workflows when cwd is provided', () => {
-    const projectWorkflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(projectWorkflowsDir, { recursive: true });
-    writeFileSync(join(projectWorkflowsDir, 'project-custom.yaml'), SAMPLE_WORKFLOW);
-
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('project-custom');
-  });
-
-  it('should include builtin workflows regardless of cwd', () => {
-    const workflows = listWorkflows(tempDir);
-    expect(workflows).toContain('default');
-  });
-
-});
-
-describe('loadAllWorkflows with project-local', () => {
-  let tempDir: string;
-
-  beforeEach(() => {
-    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-'));
-  });
-
-  afterEach(() => {
-    rmSync(tempDir, { recursive: true, force: true });
-  });
-
-  it('should include project-local workflows when cwd is provided', () => {
-    const projectWorkflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(projectWorkflowsDir, { recursive: true });
-    writeFileSync(join(projectWorkflowsDir, 'project-custom.yaml'), SAMPLE_WORKFLOW);
-
-    const workflows = loadAllWorkflows(tempDir);
-    expect(workflows.has('project-custom')).toBe(true);
-    expect(workflows.get('project-custom')!.name).toBe('test-workflow');
-  });
-
-  it('should have project-local override builtin when same name', () => {
-    const projectWorkflowsDir = join(tempDir, '.takt', 'workflows');
-    mkdirSync(projectWorkflowsDir, { recursive: true });
-
-    const overrideWorkflow = `name: project-override
-description: Project override
-initial_movement: step1
-max_iterations: 1
-
-movements:
-  - name: step1
-    agent: coder
-    instruction: "{task}"
-`;
-    writeFileSync(join(projectWorkflowsDir, 'default.yaml'), overrideWorkflow);
-
-    const workflows = loadAllWorkflows(tempDir);
-    expect(workflows.get('default')!.name).toBe('project-override');
-  });
-
-});
@@ -19,7 +19,7 @@ export interface RunAgentOptions {
   allowedTools?: string[];
   /** Maximum number of agentic turns */
   maxTurns?: number;
-  /** Permission mode for tool execution (from workflow step) */
+  /** Permission mode for tool execution (from piece step) */
   permissionMode?: PermissionMode;
   onStream?: StreamCallback;
   onPermissionRequest?: PermissionHandler;
@@ -4,10 +4,10 @@
  * Registers all named subcommands (run, watch, add, list, switch, clear, eject, config, prompt).
  */

-import { clearAgentSessions, getCurrentWorkflow } from '../../infra/config/index.js';
+import { clearAgentSessions, getCurrentPiece } from '../../infra/config/index.js';
 import { success } from '../../shared/ui/index.js';
 import { runAllTasks, addTask, watchTasks, listTasks } from '../../features/tasks/index.js';
-import { switchWorkflow, switchConfig, ejectBuiltin } from '../../features/config/index.js';
+import { switchPiece, switchConfig, ejectBuiltin } from '../../features/config/index.js';
 import { previewPrompts } from '../../features/prompt/index.js';
 import { program, resolvedCwd } from './program.js';
 import { resolveAgentOverrides } from './helpers.js';
@@ -16,8 +16,8 @@ program
   .command('run')
   .description('Run all pending tasks from .takt/tasks/')
   .action(async () => {
-    const workflow = getCurrentWorkflow(resolvedCwd);
-    await runAllTasks(resolvedCwd, workflow, resolveAgentOverrides(program));
+    const piece = getCurrentPiece(resolvedCwd);
+    await runAllTasks(resolvedCwd, piece, resolveAgentOverrides(program));
   });

 program
@@ -44,10 +44,10 @@ program

 program
   .command('switch')
-  .description('Switch workflow interactively')
-  .argument('[workflow]', 'Workflow name')
-  .action(async (workflow?: string) => {
-    await switchWorkflow(resolvedCwd, workflow);
+  .description('Switch piece interactively')
+  .argument('[piece]', 'Piece name')
+  .action(async (piece?: string) => {
+    await switchPiece(resolvedCwd, piece);
   });

 program
@@ -60,7 +60,7 @@ program

 program
   .command('eject')
-  .description('Copy builtin workflow/agents to ~/.takt/ for customization')
+  .description('Copy builtin piece/agents to ~/.takt/ for customization')
   .argument('[name]', 'Specific builtin to eject')
   .action(async (name?: string) => {
     await ejectBuiltin(name);
@@ -77,7 +77,7 @@ program
 program
   .command('prompt')
   .description('Preview assembled prompts for each movement and phase')
-  .argument('[workflow]', 'Workflow name or path (defaults to current)')
-  .action(async (workflow?: string) => {
-    await previewPrompts(resolvedCwd, workflow);
+  .argument('[piece]', 'Piece name or path (defaults to current)')
+  .action(async (piece?: string) => {
+    await previewPrompts(resolvedCwd, piece);
   });
@@ -42,7 +42,7 @@ program
 // --- Global options ---
 program
   .option('-i, --issue <number>', 'GitHub issue number (equivalent to #N)', (val: string) => parseInt(val, 10))
-  .option('-w, --workflow <name>', 'Workflow name or path to workflow file')
+  .option('-w, --piece <name>', 'Piece name or path to piece file')
   .option('-b, --branch <name>', 'Branch name (auto-generated if omitted)')
   .option('--auto-pr', 'Create PR after successful execution')
   .option('--repo <owner/repo>', 'Repository (defaults to current)')
@@ -8,11 +8,11 @@
 import { info, error } from '../../shared/ui/index.js';
 import { getErrorMessage } from '../../shared/utils/index.js';
 import { resolveIssueTask, isIssueReference } from '../../infra/github/index.js';
-import { selectAndExecuteTask, determineWorkflow, type SelectAndExecuteOptions } from '../../features/tasks/index.js';
+import { selectAndExecuteTask, determinePiece, type SelectAndExecuteOptions } from '../../features/tasks/index.js';
 import { executePipeline } from '../../features/pipeline/index.js';
 import { interactiveMode } from '../../features/interactive/index.js';
-import { getWorkflowDescription } from '../../infra/config/index.js';
-import { DEFAULT_WORKFLOW_NAME } from '../../shared/constants.js';
+import { getPieceDescription } from '../../infra/config/index.js';
+import { DEFAULT_PIECE_NAME } from '../../shared/constants.js';
 import { program, resolvedCwd, pipelineMode } from './program.js';
 import { resolveAgentOverrides, parseCreateWorktreeOption, isDirectTask } from './helpers.js';
@@ -25,7 +25,7 @@ program
     const selectOptions: SelectAndExecuteOptions = {
       autoPr: opts.autoPr === true,
       repo: opts.repo as string | undefined,
-      workflow: opts.workflow as string | undefined,
+      piece: opts.piece as string | undefined,
       createWorktree: createWorktreeOverride,
     };

@@ -34,7 +34,7 @@ program
     const exitCode = await executePipeline({
       issueNumber: opts.issue as number | undefined,
       task: opts.task as string | undefined,
-      workflow: (opts.workflow as string | undefined) ?? DEFAULT_WORKFLOW_NAME,
+      piece: (opts.piece as string | undefined) ?? DEFAULT_PIECE_NAME,
       branch: opts.branch as string | undefined,
       autoPr: opts.autoPr === true,
       repo: opts.repo as string | undefined,
@@ -89,21 +89,21 @@ program
   }

   // Short single word or no task → interactive mode (with optional initial input)
-  const workflowId = await determineWorkflow(resolvedCwd, selectOptions.workflow);
-  if (workflowId === null) {
+  const pieceId = await determinePiece(resolvedCwd, selectOptions.piece);
+  if (pieceId === null) {
     info('Cancelled');
     return;
   }

-  const workflowContext = getWorkflowDescription(workflowId, resolvedCwd);
-  const result = await interactiveMode(resolvedCwd, task, workflowContext);
+  const pieceContext = getPieceDescription(pieceId, resolvedCwd);
+  const result = await interactiveMode(resolvedCwd, task, pieceContext);

   if (!result.confirmed) {
     return;
   }

   selectOptions.interactiveUserInput = true;
-  selectOptions.workflow = workflowId;
+  selectOptions.piece = pieceId;
   selectOptions.interactiveMetadata = { confirmed: result.confirmed, task: result.task };
   await selectAndExecuteTask(resolvedCwd, result.task, selectOptions, agentOverrides);
 });
@@ -7,9 +7,9 @@ export type TaktConfig = z.infer<typeof TaktConfigSchema>;

 export const DEFAULT_CONFIG: TaktConfig = {
   defaultModel: 'sonnet',
-  defaultWorkflow: 'default',
+  defaultPiece: 'default',
   agentDirs: [],
-  workflowDirs: [],
+  pieceDirs: [],
   claude: {
     command: 'claude',
     timeout: 300000,
@@ -37,17 +37,17 @@ export interface PipelineConfig {
 export interface GlobalConfig {
   language: Language;
   trustedDirectories: string[];
-  defaultWorkflow: string;
+  defaultPiece: string;
   logLevel: 'debug' | 'info' | 'warn' | 'error';
   provider?: 'claude' | 'codex' | 'mock';
   model?: string;
   debug?: DebugConfig;
   /** Directory for shared clones (worktree_dir in config). If empty, uses ../{clone-name} relative to project */
   worktreeDir?: string;
-  /** List of builtin workflow/agent names to exclude from fallback loading */
+  /** List of builtin piece/agent names to exclude from fallback loading */
   disabledBuiltins?: string[];
-  /** Enable builtin workflows from resources/global/{lang}/workflows */
-  enableBuiltinWorkflows?: boolean;
+  /** Enable builtin pieces from resources/global/{lang}/pieces */
+  enableBuiltinPieces?: boolean;
   /** Anthropic API key for Claude Code SDK (overridden by TAKT_ANTHROPIC_API_KEY env var) */
   anthropicApiKey?: string;
   /** OpenAI API key for Codex SDK (overridden by TAKT_OPENAI_API_KEY env var) */
@@ -58,13 +58,13 @@ export interface GlobalConfig {
   minimalOutput?: boolean;
   /** Path to bookmarks file (default: ~/.takt/preferences/bookmarks.yaml) */
   bookmarksFile?: string;
-  /** Path to workflow categories file (default: ~/.takt/preferences/workflow-categories.yaml) */
-  workflowCategoriesFile?: string;
+  /** Path to piece categories file (default: ~/.takt/preferences/piece-categories.yaml) */
+  pieceCategoriesFile?: string;
 }

 /** Project-level configuration */
 export interface ProjectConfig {
-  workflow?: string;
+  piece?: string;
   agents?: CustomAgentConfig[];
   provider?: 'claude' | 'codex' | 'mock';
 }
@@ -8,11 +8,11 @@ export type {
   ReportObjectConfig,
   AgentResponse,
   SessionState,
-  WorkflowRule,
-  WorkflowMovement,
+  PieceRule,
+  PieceMovement,
   LoopDetectionConfig,
-  WorkflowConfig,
-  WorkflowState,
+  PieceConfig,
+  PieceState,
   CustomAgentConfig,
   DebugConfig,
   Language,
@@ -1,12 +1,12 @@
 /**
- * Workflow configuration and runtime state types
+ * Piece configuration and runtime state types
  */

 import type { PermissionMode } from './status.js';
 import type { AgentResponse } from './response.js';

 /** Rule-based transition configuration (unified format) */
-export interface WorkflowRule {
+export interface PieceRule {
   /** Human-readable condition text */
   condition: string;
   /** Next movement name (e.g., implement, COMPLETE, ABORT). Optional for parallel sub-movements. */
@@ -32,7 +32,7 @@ export interface WorkflowRule {
   aggregateConditionText?: string | string[];
 }

-/** Report file configuration for a workflow movement (label: path pair) */
+/** Report file configuration for a piece movement (label: path pair) */
 export interface ReportConfig {
   /** Display label (e.g., "Scope", "Decisions") */
   label: string;
@@ -50,12 +50,12 @@ export interface ReportObjectConfig {
   format?: string;
 }

-/** Single movement in a workflow */
-export interface WorkflowMovement {
+/** Single movement in a piece */
+export interface PieceMovement {
   name: string;
-  /** Brief description of this movement's role in the workflow */
+  /** Brief description of this movement's role in the piece */
   description?: string;
-  /** Agent name, path, or inline prompt as specified in workflow YAML. Undefined when movement runs without an agent. */
+  /** Agent name, path, or inline prompt as specified in piece YAML. Undefined when movement runs without an agent. */
   agent?: string;
   /** Session handling for this movement */
   session?: 'continue' | 'refresh';
@@ -75,12 +75,12 @@ export interface WorkflowMovement {
   edit?: boolean;
   instructionTemplate: string;
   /** Rules for movement routing */
-  rules?: WorkflowRule[];
+  rules?: PieceRule[];
   /** Report file configuration. Single string, array of label:path, or object with order/format. */
   report?: string | ReportConfig[] | ReportObjectConfig;
   passPreviousResponse: boolean;
   /** Sub-movements to execute in parallel. When set, this movement runs all sub-movements concurrently. */
-  parallel?: WorkflowMovement[];
+  parallel?: PieceMovement[];
 }

 /** Loop detection configuration */
@@ -91,11 +91,11 @@ export interface LoopDetectionConfig {
   action?: 'abort' | 'warn' | 'ignore';
 }

-/** Workflow configuration */
-export interface WorkflowConfig {
+/** Piece configuration */
+export interface PieceConfig {
   name: string;
   description?: string;
-  movements: WorkflowMovement[];
+  movements: PieceMovement[];
   initialMovement: string;
   maxIterations: number;
   /** Loop detection settings */
@@ -108,9 +108,9 @@ export interface WorkflowConfig {
   answerAgent?: string;
 }

-/** Runtime state of a workflow execution */
-export interface WorkflowState {
-  workflowName: string;
+/** Runtime state of a piece execution */
+export interface PieceState {
+  pieceName: string;
   currentMovement: string;
   iteration: number;
   movementOutputs: Map<string, AgentResponse>;
@@ -29,9 +29,9 @@ export const ClaudeConfigSchema = z.object({
 /** TAKT global tool configuration schema */
 export const TaktConfigSchema = z.object({
   defaultModel: AgentModelSchema,
-  defaultWorkflow: z.string().default('default'),
+  defaultPiece: z.string().default('default'),
   agentDirs: z.array(z.string()).default([]),
-  workflowDirs: z.array(z.string()).default([]),
+  pieceDirs: z.array(z.string()).default([]),
   sessionDir: z.string().optional(),
   claude: ClaudeConfigSchema.default({ command: 'claude', timeout: 300000 }),
 });
@@ -100,7 +100,7 @@ export const ReportFieldSchema = z.union([
 ]);

 /** Rule-based transition schema (new unified format) */
-export const WorkflowRuleSchema = z.object({
+export const PieceRuleSchema = z.object({
   /** Human-readable condition text */
   condition: z.string().min(1),
   /** Next movement name (e.g., implement, COMPLETE, ABORT). Optional for parallel sub-movements (parent handles routing). */
@@ -125,13 +125,13 @@ export const ParallelSubMovementRawSchema = z.object({
   edit: z.boolean().optional(),
   instruction: z.string().optional(),
   instruction_template: z.string().optional(),
-  rules: z.array(WorkflowRuleSchema).optional(),
+  rules: z.array(PieceRuleSchema).optional(),
   report: ReportFieldSchema.optional(),
   pass_previous_response: z.boolean().optional().default(true),
 });

-/** Workflow movement schema - raw YAML format */
-export const WorkflowMovementRawSchema = z.object({
+/** Piece movement schema - raw YAML format */
+export const PieceMovementRawSchema = z.object({
   name: z.string().min(1),
   description: z.string().optional(),
   /** Agent is required for normal movements, optional for parallel container movements */
@@ -150,7 +150,7 @@ export const WorkflowMovementRawSchema = z.object({
   instruction: z.string().optional(),
   instruction_template: z.string().optional(),
   /** Rules for movement routing */
-  rules: z.array(WorkflowRuleSchema).optional(),
+  rules: z.array(PieceRuleSchema).optional(),
   /** Report file(s) for this movement */
   report: ReportFieldSchema.optional(),
   pass_previous_response: z.boolean().optional().default(true),
@@ -158,11 +158,11 @@ export const WorkflowMovementRawSchema = z.object({
   parallel: z.array(ParallelSubMovementRawSchema).optional(),
 });

-/** Workflow configuration schema - raw YAML format */
-export const WorkflowConfigRawSchema = z.object({
+/** Piece configuration schema - raw YAML format */
+export const PieceConfigRawSchema = z.object({
   name: z.string().min(1),
   description: z.string().optional(),
-  movements: z.array(WorkflowMovementRawSchema).min(1),
+  movements: z.array(PieceMovementRawSchema).min(1),
   initial_movement: z.string().optional(),
   max_iterations: z.number().int().positive().optional().default(10),
   answer_agent: z.string().optional(),
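For reference, the raw YAML shape this schema accepts can be sketched with a minimal piece file. This is an illustrative example only: the field names come from the schema in the hunk above, and the movement values (`example-piece`, `implement`, `coder`) mirror the test fixtures elsewhere in this diff rather than any shipped builtin.

```yaml
# Minimal piece definition accepted by the raw piece config schema (sketch).
# initial_movement and max_iterations are optional; max_iterations defaults to 10.
name: example-piece
description: Single-movement example
initial_movement: implement
max_iterations: 3

movements:
  - name: implement
    agent: coder
    instruction: "{task}"
```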
@@ -199,35 +199,35 @@ export const PipelineConfigSchema = z.object({
   pr_body_template: z.string().optional(),
 });

-/** Workflow category config schema (recursive) */
-export type WorkflowCategoryConfigNode = {
-  workflows?: string[];
-  [key: string]: WorkflowCategoryConfigNode | string[] | undefined;
+/** Piece category config schema (recursive) */
+export type PieceCategoryConfigNode = {
+  pieces?: string[];
+  [key: string]: PieceCategoryConfigNode | string[] | undefined;
 };

-export const WorkflowCategoryConfigNodeSchema: z.ZodType<WorkflowCategoryConfigNode> = z.lazy(() =>
+export const PieceCategoryConfigNodeSchema: z.ZodType<PieceCategoryConfigNode> = z.lazy(() =>
   z.object({
-    workflows: z.array(z.string()).optional(),
-  }).catchall(WorkflowCategoryConfigNodeSchema)
+    pieces: z.array(z.string()).optional(),
+  }).catchall(PieceCategoryConfigNodeSchema)
 );

-export const WorkflowCategoryConfigSchema = z.record(z.string(), WorkflowCategoryConfigNodeSchema);
+export const PieceCategoryConfigSchema = z.record(z.string(), PieceCategoryConfigNodeSchema);

 /** Global config schema */
 export const GlobalConfigSchema = z.object({
   language: LanguageSchema.optional().default(DEFAULT_LANGUAGE),
   trusted_directories: z.array(z.string()).optional().default([]),
-  default_workflow: z.string().optional().default('default'),
+  default_piece: z.string().optional().default('default'),
   log_level: z.enum(['debug', 'info', 'warn', 'error']).optional().default('info'),
   provider: z.enum(['claude', 'codex', 'mock']).optional().default('claude'),
   model: z.string().optional(),
   debug: DebugConfigSchema.optional(),
   /** Directory for shared clones (worktree_dir in config). If empty, uses ../{clone-name} relative to project */
   worktree_dir: z.string().optional(),
-  /** List of builtin workflow/agent names to exclude from fallback loading */
+  /** List of builtin piece/agent names to exclude from fallback loading */
   disabled_builtins: z.array(z.string()).optional().default([]),
-  /** Enable builtin workflows from resources/global/{lang}/workflows */
-  enable_builtin_workflows: z.boolean().optional(),
+  /** Enable builtin pieces from resources/global/{lang}/pieces */
+  enable_builtin_pieces: z.boolean().optional(),
   /** Anthropic API key for Claude Code SDK (overridden by TAKT_ANTHROPIC_API_KEY env var) */
   anthropic_api_key: z.string().optional(),
   /** OpenAI API key for Codex SDK (overridden by TAKT_OPENAI_API_KEY env var) */
@@ -238,13 +238,13 @@ export const GlobalConfigSchema = z.object({
   minimal_output: z.boolean().optional().default(false),
   /** Path to bookmarks file (default: ~/.takt/preferences/bookmarks.yaml) */
   bookmarks_file: z.string().optional(),
-  /** Path to workflow categories file (default: ~/.takt/preferences/workflow-categories.yaml) */
-  workflow_categories_file: z.string().optional(),
+  /** Path to piece categories file (default: ~/.takt/preferences/piece-categories.yaml) */
+  piece_categories_file: z.string().optional(),
 });

 /** Project config schema */
 export const ProjectConfigSchema = z.object({
-  workflow: z.string().optional(),
+  piece: z.string().optional(),
   agents: z.array(CustomAgentConfigSchema).optional(),
   provider: z.enum(['claude', 'codex', 'mock']).optional(),
 });
Some files were not shown because too many files have changed in this diff.