commit 8384027f21

CHANGELOG.md (31 lines changed)
@@ -4,6 +4,37 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
 
+## [0.11.0] - 2026-02-10
+
+### Added
+
+- **`e2e-test` builtin piece**: Added a new piece dedicated to E2E testing, with an E2E analysis → E2E implementation → review → fix flow (for Vitest-based E2E tests)
+- **`error` status**: Split provider errors out of `blocked` so that error states can be clearly distinguished; added a retry mechanism to Codex
+- **Centralized task YAML management**: Consolidated task file management into `tasks.yaml`, with structured task lifecycle management (pending/running/completed/failed) via `TaskRecordSchema`
+- **Task instruction documentation**: Documented the structure and purpose of task instruction files (#174)
+- **Review policy**: Added a shared review policy facet (`builtins/{lang}/policies/review.md`)
+- **E2E test for SIGINT graceful shutdown**: Added an E2E test that verifies Ctrl+C behavior during parallel execution
+
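The centralized task store described above can be illustrated with a hypothetical `tasks.yaml` record. This is a minimal sketch: only the four lifecycle statuses are named in the changelog entry; every field name below other than the status values is an assumption, not the actual `TaskRecordSchema` shape.

```yaml
# Hypothetical tasks.yaml entry -- field names are illustrative assumptions
tasks:
  - id: task-0001                  # assumed identifier field
    title: Add e2e-test piece      # assumed title field
    status: pending                # lifecycle: pending / running / completed / failed
    piece: e2e-test                # assumed reference to the piece in use
```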
+### Changed
+
+- **Builtin piece simplification**: Removed the top-level `policies`/`personas`/`knowledge`/`instructions`/`report_formats` declarations from all builtin pieces and moved to name-based implicit resolution; piece YAML is now much simpler
+- **Piece category spec update**: Improved category configuration and display logic; strengthened category management in the global config (#184)
+- **`takt list` priority and reference improvements**: Optimized branch resolution performance and introduced a base-commit cache (#186, #195, #196)
+- **Improved Ctrl+C signal handling**: Stabilized SIGINT handling during parallel execution
+- **Stronger loop prevention policy**: Strengthened the policy that keeps agents from falling into infinite loops
+
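The builtin piece simplification above can be sketched as a before/after of a piece file. The facet paths come from the diffs later in this commit; the exact resolution rules are an assumption beyond "resolved implicitly by name":

```yaml
# Before: each facet declared explicitly at the top level of the piece YAML
policies:
  coding: ../policies/coding.md
personas:
  coder: ../personas/coder.md

# After: the top-level maps are removed; a movement that says
#   persona: coder
#   policy: coding
# is resolved implicitly by name to the corresponding builtin facet file
```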
+### Fixed
+
+- Fixed diff handling of original instructions not working correctly (#181)
+- Fixed task instruction goals being inappropriately scope-expanded; the goal is now always pinned to implementation and execution
+
+### Internal
+
+- Large-scale refactoring of the task management code: retired `parser.ts` and split it into `store.ts`/`mapper.ts`/`schema.ts`/`naming.ts`; branch resolution is broken out into `branchGitResolver.ts`/`branchBaseCandidateResolver.ts`/`branchBaseRefCache.ts`/`branchEntryPointResolver.ts`
+- Major test expansion and refactoring: added new tests including aggregate-evaluator, blocked-handler, branchGitResolver-performance, branchList-regression, buildListItems-performance, error-utils, escape, facet-resolution, getFilesChanged, global-pieceCategories, instruction-context, instruction-helpers, judgment-strategies, listTasksInteractivePendingLabel, loop-detector, naming, reportDir, resetCategories, rule-evaluator, rule-utils, slug, state-manager, switchPiece, task-schema, text, transitions, and watchTasks
+- Refactored the Codex client
+- Improved the facet resolution logic in the piece parser
+
 ## [0.10.0] - 2026-02-09
 
 ### Added
@@ -465,6 +465,7 @@ TAKT includes multiple builtin pieces:
 | `review-only` | Read-only code review piece that makes no changes. |
 | `structural-reform` | Full project review and structural reform: iterative codebase restructuring with staged file splits. |
 | `unit-test` | Unit test focused piece: test analysis → test implementation → review → fix. |
+| `e2e-test` | E2E test focused piece: E2E analysis → E2E implementation → review → fix (Vitest-based E2E flow). |
 
 **Per-persona provider overrides:** Use `persona_providers` in config to route specific personas to different providers (e.g., coder on Codex, reviewers on Claude) without duplicating pieces.
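A minimal sketch of the `persona_providers` override described above. Only the key name `persona_providers` and the coder-on-Codex / reviewers-on-Claude example come from the text; the config file location, the persona names, and the exact provider value spellings are assumptions:

```yaml
# Hypothetical config snippet -- routes specific personas to other providers
persona_providers:
  coder: codex                       # run the coder persona on Codex
  ai-antipattern-reviewer: claude    # run reviewers on Claude
  architecture-reviewer: claude
```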
builtins/en/instructions/implement-e2e-test.md (new file, 51 lines)

@@ -0,0 +1,51 @@
+Implement E2E tests according to the test plan.
+Refer only to files within the Report Directory shown in the Piece Context. Do not search or reference other report directories.
+
+**Actions:**
+1. Review the test plan report
+2. Implement or update tests following existing E2E layout (e.g., `e2e/specs/`)
+3. Run E2E tests (minimum: `npm run test:e2e:mock`, and targeted spec runs when needed)
+4. If tests fail, analyze root cause, fix test or code, and rerun
+5. Confirm related existing tests are not broken
+
+**Constraints:**
+- Keep the current E2E framework (Vitest) unchanged
+- Keep one scenario per test and make assertions explicit
+- Reuse existing fixtures/helpers/mock strategy for external dependencies
+
+**Scope output contract (create at the start of implementation):**
+```markdown
+# Change Scope Declaration
+
+## Task
+{One-line task summary}
+
+## Planned changes
+| Type | File |
+|------|------|
+| Create | `e2e/specs/example.e2e.ts` |
+
+## Estimated size
+Small / Medium / Large
+
+## Impact area
+- {Affected modules or features}
+```
+
+**Decisions output contract (at implementation completion, only if decisions were made):**
+```markdown
+# Decision Log
+
+## 1. {Decision}
+- **Context**: {Why the decision was needed}
+- **Options considered**: {List of options}
+- **Rationale**: {Reason for the choice}
+```
+
+**Required output (include headings)**
+## Work results
+- {Summary of actions taken}
+## Changes made
+- {Summary of changes}
+## Test results
+- {Command executed and results}
builtins/en/instructions/plan-e2e-test.md (new file, 11 lines)

@@ -0,0 +1,11 @@
+Analyze the target code and identify missing E2E tests.
+
+**Note:** If a Previous Response exists, this is a replan due to rejection.
+Revise the test plan taking that feedback into account.
+
+**Actions:**
+1. Read target features, implementation, and existing E2E specs (`e2e/specs/**/*.e2e.ts`) to understand behavior
+2. Summarize current E2E coverage (happy path, failure path, regression points)
+3. Identify missing E2E scenarios with expected outcomes and observability points
+4. Specify execution commands (`npm run test:e2e:mock` and, when needed, `npx vitest run e2e/specs/<target>.e2e.ts`)
+5. Provide concrete guidance for the failure analysis → fix → rerun workflow
@@ -10,7 +10,10 @@ piece_categories:
     pieces:
       - review-fix-minimal
       - review-only
+  🧪 Testing:
+    pieces:
       - unit-test
+      - e2e-test
   🎨 Frontend: {}
   ⚙️ Backend: {}
   🔧 Expert:
@@ -1,24 +1,6 @@
 name: coding
 description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  review-arch: ../instructions/review-arch.md
-  fix: ../instructions/fix.md
 initial_movement: plan
 movements:
 - name: plan

@@ -150,7 +132,3 @@ movements:
     - condition: Cannot determine, insufficient information
       next: ABORT
     instruction: fix
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
@@ -1,11 +1,6 @@
 name: compound-eye
 description: Multi-model review - send the same instruction to Claude and Codex simultaneously, synthesize both responses
 max_iterations: 10
-knowledge:
-  architecture: ../knowledge/architecture.md
-personas:
-  coder: ../personas/coder.md
-  supervisor: ../personas/supervisor.md
 initial_movement: evaluate
 movements:
 - name: evaluate
@@ -1,32 +1,6 @@
 name: default
 description: Standard development piece with planning and specialized reviews
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  backend: ../knowledge/backend.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-arch: ../instructions/review-arch.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
 initial_movement: plan
 loop_monitors:
 - cycle:

@@ -282,10 +256,3 @@ movements:
     report:
     - Validation: 07-supervisor-validation.md
     - Summary: summary.md
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
builtins/en/pieces/e2e-test.yaml (new file, 236 lines)

@@ -0,0 +1,236 @@
+name: e2e-test
+description: E2E test focused piece (E2E analysis → E2E implementation → review → fix)
+max_iterations: 20
+initial_movement: plan_test
+loop_monitors:
+- cycle:
+  - ai_review
+  - ai_fix
+  threshold: 3
+  judge:
+    persona: supervisor
+    instruction_template: |
+      The ai_review ↔ ai_fix loop has repeated {cycle_count} times.
+
+      Review the reports from each cycle and determine whether this loop
+      is healthy (making progress) or unproductive (repeating the same issues).
+
+      **Reports to reference:**
+      - AI Review results: {report:04-ai-review.md}
+
+      **Judgment criteria:**
+      - Are new issues being found/fixed in each cycle?
+      - Are the same findings being repeated?
+      - Are fixes actually being applied?
+    rules:
+    - condition: Healthy (making progress)
+      next: ai_review
+    - condition: Unproductive (no improvement)
+      next: review_test
+movements:
+- name: plan_test
+  edit: false
+  persona: test-planner
+  policy: testing
+  knowledge:
+  - architecture
+  - backend
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - Bash
+  - WebSearch
+  - WebFetch
+  rules:
+  - condition: Test plan complete
+    next: implement_test
+  - condition: User is asking a question (not an E2E test task)
+    next: COMPLETE
+  - condition: Requirements unclear, insufficient info
+    next: ABORT
+    appendix: |
+      Clarifications needed:
+      - {Question 1}
+      - {Question 2}
+  instruction: plan-e2e-test
+  output_contracts:
+    report:
+    - name: 00-test-plan.md
+      format: test-plan
+
+- name: implement_test
+  edit: true
+  persona: coder
+  policy:
+  - coding
+  - testing
+  session: refresh
+  knowledge:
+  - backend
+  - architecture
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - Edit
+  - Write
+  - Bash
+  - WebSearch
+  - WebFetch
+  permission_mode: edit
+  rules:
+  - condition: Test implementation complete
+    next: ai_review
+  - condition: No implementation (report only)
+    next: ai_review
+  - condition: Cannot proceed, insufficient info
+    next: ai_review
+  - condition: User input required
+    next: implement_test
+    requires_user_input: true
+    interactive_only: true
+  instruction: implement-e2e-test
+  output_contracts:
+    report:
+    - Scope: 02-coder-scope.md
+    - Decisions: 03-coder-decisions.md
+
+- name: ai_review
+  edit: false
+  persona: ai-antipattern-reviewer
+  policy:
+  - review
+  - ai-antipattern
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - WebSearch
+  - WebFetch
+  rules:
+  - condition: No AI-specific issues
+    next: review_test
+  - condition: AI-specific issues found
+    next: ai_fix
+  instruction: ai-review
+  output_contracts:
+    report:
+    - name: 04-ai-review.md
+      format: ai-review
+
+- name: ai_fix
+  edit: true
+  persona: coder
+  policy:
+  - coding
+  - testing
+  session: refresh
+  knowledge:
+  - backend
+  - architecture
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - Edit
+  - Write
+  - Bash
+  - WebSearch
+  - WebFetch
+  permission_mode: edit
+  rules:
+  - condition: AI issues fixed
+    next: ai_review
+  - condition: No fix needed (verified target files/spec)
+    next: ai_no_fix
+  - condition: Cannot proceed, insufficient info
+    next: ai_no_fix
+  instruction: ai-fix
+
+- name: ai_no_fix
+  edit: false
+  persona: architecture-reviewer
+  policy: review
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  rules:
+  - condition: ai_review's findings are valid (fix required)
+    next: ai_fix
+  - condition: ai_fix's judgment is valid (no fix needed)
+    next: review_test
+  instruction: arbitrate
+
+- name: review_test
+  edit: false
+  persona: qa-reviewer
+  policy:
+  - review
+  - qa
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - WebSearch
+  - WebFetch
+  rules:
+  - condition: approved
+    next: supervise
+  - condition: needs_fix
+    next: fix
+  instruction: review-test
+  output_contracts:
+    report:
+    - name: 05-qa-review.md
+      format: qa-review
+
+- name: fix
+  edit: true
+  persona: coder
+  policy:
+  - coding
+  - testing
+  session: refresh
+  knowledge:
+  - backend
+  - architecture
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - Edit
+  - Write
+  - Bash
+  - WebSearch
+  - WebFetch
+  permission_mode: edit
+  rules:
+  - condition: Fix complete
+    next: review_test
+  - condition: Cannot proceed, insufficient info
+    next: plan_test
+  instruction: fix
+
+- name: supervise
+  edit: false
+  persona: supervisor
+  policy: review
+  allowed_tools:
+  - Read
+  - Glob
+  - Grep
+  - Bash
+  - WebSearch
+  - WebFetch
+  rules:
+  - condition: All checks passed
+    next: COMPLETE
+  - condition: Requirements unmet, tests failing, build errors
+    next: plan_test
+  instruction: supervise
+  output_contracts:
+    report:
+    - Validation: 06-supervisor-validation.md
+    - Summary: summary.md
@@ -1,41 +1,6 @@
 name: expert-cqrs
 description: CQRS+ES, Frontend, Security, QA Expert Review
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  frontend: ../knowledge/frontend.md
-  backend: ../knowledge/backend.md
-  cqrs-es: ../knowledge/cqrs-es.md
-  security: ../knowledge/security.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  cqrs-es-reviewer: ../personas/cqrs-es-reviewer.md
-  frontend-reviewer: ../personas/frontend-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  expert-supervisor: ../personas/expert-supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-cqrs-es: ../instructions/review-cqrs-es.md
-  review-frontend: ../instructions/review-frontend.md
-  review-security: ../instructions/review-security.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: plan
 movements:
 - name: plan

@@ -323,12 +288,3 @@ movements:
       next: supervise
     - condition: Unable to proceed with fixes
       next: plan
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  cqrs-es-review: ../output-contracts/cqrs-es-review.md
-  frontend-review: ../output-contracts/frontend-review.md
-  security-review: ../output-contracts/security-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -1,39 +1,6 @@
 name: expert
 description: Architecture, Frontend, Security, QA Expert Review
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  frontend: ../knowledge/frontend.md
-  backend: ../knowledge/backend.md
-  security: ../knowledge/security.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  frontend-reviewer: ../personas/frontend-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  expert-supervisor: ../personas/expert-supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-arch: ../instructions/review-arch.md
-  review-frontend: ../instructions/review-frontend.md
-  review-security: ../instructions/review-security.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: plan
 movements:
 - name: plan

@@ -317,12 +284,3 @@ movements:
       next: supervise
     - condition: Unable to proceed with fixes
       next: plan
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
-  frontend-review: ../output-contracts/frontend-review.md
-  security-review: ../output-contracts/security-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -1,10 +1,6 @@
 name: magi
 description: MAGI Deliberation System - Analyze from 3 perspectives and decide by majority
 max_iterations: 5
-personas:
-  melchior: ../personas/melchior.md
-  balthasar: ../personas/balthasar.md
-  casper: ../personas/casper.md
 initial_movement: melchior
 movements:
 - name: melchior
@@ -1,21 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-personas:
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  implement: ../instructions/implement.md
-  review-ai: ../instructions/review-ai.md
-  ai-fix: ../instructions/ai-fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: implement
 movements:
 - name: implement

@@ -190,5 +175,3 @@ movements:
     - condition: Cannot proceed, insufficient info
       next: implement
     instruction: fix-supervisor
-report_formats:
-  ai-review: ../output-contracts/ai-review.md
@@ -1,11 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
 max_iterations: 10
-policies:
-  coding: ../policies/coding.md
-  testing: ../policies/testing.md
-personas:
-  coder: ../personas/coder.md
 initial_movement: execute
 movements:
 - name: execute
@@ -1,10 +1,6 @@
 name: research
 description: Research piece - autonomously executes research without asking questions
 max_iterations: 10
-personas:
-  research-planner: ../personas/research-planner.md
-  research-digger: ../personas/research-digger.md
-  research-supervisor: ../personas/research-supervisor.md
 initial_movement: plan
 movements:
 - name: plan
@@ -1,21 +1,6 @@
 name: review-fix-minimal
 description: Review and fix piece for existing code (starts with review, no implementation)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-personas:
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  implement: ../instructions/implement.md
-  review-ai: ../instructions/review-ai.md
-  ai-fix: ../instructions/ai-fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: reviewers
 movements:
 - name: implement

@@ -190,5 +175,3 @@ movements:
     - condition: Cannot proceed, insufficient info
       next: implement
     instruction: fix-supervisor
-report_formats:
-  ai-review: ../output-contracts/ai-review.md
@@ -1,23 +1,6 @@
 name: review-only
 description: Review-only piece - reviews code without making edits
 max_iterations: 10
-policies:
-  review: ../policies/review.md
-  ai-antipattern: ../policies/ai-antipattern.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-  security: ../knowledge/security.md
-personas:
-  planner: ../personas/planner.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-  pr-commenter: ../personas/pr-commenter.md
-instructions:
-  review-arch: ../instructions/review-arch.md
-  review-security: ../instructions/review-security.md
-  review-ai: ../instructions/review-ai.md
 initial_movement: plan
 movements:
 - name: plan

@@ -230,8 +213,3 @@ movements:
     ---
     *Generated by [takt](https://github.com/toruticas/takt) review-only piece*
     ```
-report_formats:
-  architecture-review: ../output-contracts/architecture-review.md
-  security-review: ../output-contracts/security-review.md
-  ai-review: ../output-contracts/ai-review.md
-  review-summary: ../output-contracts/review-summary.md
@ -1,25 +1,6 @@
|
|||||||
name: structural-reform
|
name: structural-reform
|
||||||
description: Full project review and structural reform - iterative codebase restructuring with staged file splits
|
description: Full project review and structural reform - iterative codebase restructuring with staged file splits
|
||||||
max_iterations: 50
|
max_iterations: 50
|
||||||
policies:
|
|
||||||
coding: ../policies/coding.md
|
|
||||||
review: ../policies/review.md
|
|
||||||
testing: ../policies/testing.md
|
|
||||||
qa: ../policies/qa.md
|
|
||||||
knowledge:
|
|
||||||
backend: ../knowledge/backend.md
|
|
||||||
architecture: ../knowledge/architecture.md
|
|
||||||
personas:
|
|
||||||
planner: ../personas/planner.md
|
|
||||||
coder: ../personas/coder.md
|
|
||||||
architecture-reviewer: ../personas/architecture-reviewer.md
|
|
||||||
qa-reviewer: ../personas/qa-reviewer.md
|
|
||||||
supervisor: ../personas/supervisor.md
|
|
||||||
instructions:
|
|
||||||
implement: ../instructions/implement.md
|
|
||||||
review-arch: ../instructions/review-arch.md
|
|
||||||
review-qa: ../instructions/review-qa.md
|
|
||||||
fix: ../instructions/fix.md
|
|
||||||
initial_movement: review
|
initial_movement: review
|
||||||
loop_monitors:
|
loop_monitors:
|
||||||
- cycle:
|
- cycle:
|
||||||
@ -447,9 +428,3 @@ movements:
|
|||||||
output_contracts:
|
output_contracts:
|
||||||
report:
|
report:
|
||||||
- name: 07-progress.md
|
- name: 07-progress.md
|
||||||
report_formats:
|
|
||||||
plan: ../output-contracts/plan.md
|
|
||||||
architecture-review: ../output-contracts/architecture-review.md
|
|
||||||
qa-review: ../output-contracts/qa-review.md
|
|
||||||
validation: ../output-contracts/validation.md
|
|
||||||
summary: ../output-contracts/summary.md
|
|
||||||
|
@@ -1,31 +1,6 @@
 name: unit-test
 description: Unit test focused piece (test analysis → test implementation → review → fix)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-  backend: ../knowledge/backend.md
-personas:
-  test-planner: ../personas/test-planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  plan-test: ../instructions/plan-test.md
-  implement-test: ../instructions/implement-test.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-test: ../instructions/review-test.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
 initial_movement: plan_test
 loop_monitors:
   - cycle:
@@ -259,9 +234,3 @@ movements:
       report:
         - Validation: 06-supervisor-validation.md
         - Summary: summary.md
-report_formats:
-  test-plan: ../output-contracts/test-plan.md
-  ai-review: ../output-contracts/ai-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -98,6 +98,36 @@ To prevent circular rejections, track findings by ID.
 - Issues without `finding_id` are invalid (cannot be used as rejection grounds)
 - REJECT is allowed only when there is at least one `new` or `persists` issue
+
+## Reopen Conditions (`resolved` -> open)
+
+Reopening a resolved finding requires reproducible evidence.
+
+- To reopen a previously `resolved` finding, all of the following are required
+  1. Reproduction steps (command/input)
+  2. Expected result vs. actual result
+  3. Failing file/line evidence
+- If any of the three is missing, the reopen attempt is invalid (cannot be used as REJECT grounds)
+- If reproduction conditions changed, treat it as a different problem and issue a new `finding_id`
+
+## Immutable Meaning of `finding_id`
+
+Do not mix different problems under the same ID.
+
+- A `finding_id` must refer to one and only one problem
+- If problem meaning, evidence files, or reproduction conditions change, issue a new `finding_id`
+- Rewriting an existing `finding_id` to represent a different problem is prohibited
+
+## Handling Test File Size and Duplication
+
+Test file length and duplication are warning-level maintainability concerns by default.
+
+- Excessive test file length and duplicated test setup are `Warning` by default
+- They may be `REJECT` only when reproducible harm is shown
+  - flaky behavior
+  - false positives/false negatives
+  - inability to detect regressions
+- "Too long" or "duplicated" alone is not sufficient for `REJECT`
+
 ## Boy Scout Rule
 
 Leave it better than you found it.
builtins/ja/instructions/implement-e2e-test.md (new file, 51 lines)
@@ -0,0 +1,51 @@
+テスト計画に従ってE2Eテストを実装してください。
+Piece Contextに示されたReport Directory内のファイルのみ参照してください。他のレポートディレクトリは検索/参照しないでください。
+
+**やること:**
+1. テスト計画のレポートを確認する
+2. `e2e/specs/` など既存E2E配置に従ってテストを実装・更新する
+3. E2Eテストを実行する(最低: `npm run test:e2e:mock`、必要に応じて対象specの単体実行)
+4. 失敗時は原因を特定し、テストまたは対象コードを修正して再実行する
+5. 既存の関連テストが壊れていないことを確認する
+
+**実装の制約:**
+- 既存のE2Eフレームワーク(Vitest)を変更しない
+- テストは1シナリオ1関心で記述し、期待結果を明確にする
+- 外部依存があるケースは既存のfixture/helper/mock方針に合わせる
+
+**Scope出力契約(実装開始時に作成):**
+```markdown
+# 変更スコープ宣言
+
+## タスク
+{タスクの1行要約}
+
+## 変更予定
+| 種別 | ファイル |
+|------|---------|
+| 作成 | `e2e/specs/example.e2e.ts` |
+
+## 推定規模
+Small / Medium / Large
+
+## 影響範囲
+- {影響するモジュールや機能}
+```
+
+**Decisions出力契約(実装完了時、決定がある場合のみ):**
+```markdown
+# 決定ログ
+
+## 1. {決定内容}
+- **背景**: {なぜ決定が必要だったか}
+- **検討した選択肢**: {選択肢リスト}
+- **理由**: {選んだ理由}
+```
+
+**必須出力(見出しを含める)**
+## 作業結果
+- {実施内容の要約}
+## 変更内容
+- {変更内容の要約}
+## テスト結果
+- {実行コマンドと結果}
builtins/ja/instructions/plan-e2e-test.md (new file, 11 lines)
@@ -0,0 +1,11 @@
+対象コードを分析し、不足しているE2Eテストを洗い出してください。
+
+**注意:** Previous Responseがある場合は差し戻しのため、
+その内容を踏まえてテスト計画を見直してください。
+
+**やること:**
+1. 対象機能の仕様・実装・既存E2Eテスト(`e2e/specs/**/*.e2e.ts`)を読み、振る舞いを理解する
+2. 既存E2Eテストのカバー範囲(正常系・異常系・回帰観点)を整理する
+3. 不足しているE2Eケースを洗い出す(シナリオ、期待結果、失敗時の観測点)
+4. 実行コマンドを明記する(`npm run test:e2e:mock` / 必要時 `npx vitest run e2e/specs/<target>.e2e.ts`)
+5. 実装者向けに、失敗解析→修正→再実行の手順を具体化する
@@ -10,7 +10,10 @@ piece_categories:
     pieces:
       - review-fix-minimal
       - review-only
+  🧪 テスト:
+    pieces:
       - unit-test
+      - e2e-test
   🎨 フロントエンド: {}
   ⚙️ バックエンド: {}
   🔧 エキスパート:
@@ -1,24 +1,6 @@
 name: coding
 description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  review-arch: ../instructions/review-arch.md
-  fix: ../instructions/fix.md
 initial_movement: plan
 movements:
   - name: plan
@@ -150,7 +132,3 @@ movements:
       - condition: 判断できない、情報不足
         next: ABORT
     instruction: fix
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
@@ -1,11 +1,6 @@
 name: compound-eye
 description: 複眼レビュー - 同じ指示を Claude と Codex に同時に投げ、両者の回答を統合する
 max_iterations: 10
-knowledge:
-  architecture: ../knowledge/architecture.md
-personas:
-  coder: ../personas/coder.md
-  supervisor: ../personas/supervisor.md
 initial_movement: evaluate
 
 movements:
@@ -1,32 +1,6 @@
 name: default
 description: Standard development piece with planning and specialized reviews
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-  backend: ../knowledge/backend.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-arch: ../instructions/review-arch.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
 initial_movement: plan
 loop_monitors:
   - cycle:
@@ -282,10 +256,3 @@ movements:
       report:
         - Validation: 07-supervisor-validation.md
         - Summary: summary.md
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
builtins/ja/pieces/e2e-test.yaml (new file, 236 lines)
@@ -0,0 +1,236 @@
+name: e2e-test
+description: E2Eテスト追加に特化したピース(E2E分析→E2E実装→レビュー→修正)
+max_iterations: 20
+initial_movement: plan_test
+loop_monitors:
+  - cycle:
+      - ai_review
+      - ai_fix
+    threshold: 3
+    judge:
+      persona: supervisor
+      instruction_template: |
+        ai_review と ai_fix のループが {cycle_count} 回繰り返されました。
+
+        各サイクルのレポートを確認し、このループが健全(進捗がある)か、
+        非生産的(同じ問題を繰り返している)かを判断してください。
+
+        **参照するレポート:**
+        - AIレビュー結果: {report:04-ai-review.md}
+
+        **判断基準:**
+        - 各サイクルで新しい問題が発見・修正されているか
+        - 同じ指摘が繰り返されていないか
+        - 修正が実際に反映されているか
+    rules:
+      - condition: 健全(進捗あり)
+        next: ai_review
+      - condition: 非生産的(改善なし)
+        next: review_test
+movements:
+  - name: plan_test
+    edit: false
+    persona: test-planner
+    policy: testing
+    knowledge:
+      - architecture
+      - backend
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - Bash
+      - WebSearch
+      - WebFetch
+    rules:
+      - condition: テスト計画が完了
+        next: implement_test
+      - condition: ユーザーが質問をしている(E2Eテスト追加タスクではない)
+        next: COMPLETE
+      - condition: 要件が不明確、情報不足
+        next: ABORT
+        appendix: |
+          確認事項:
+          - {質問1}
+          - {質問2}
+    instruction: plan-e2e-test
+    output_contracts:
+      report:
+        - name: 00-test-plan.md
+          format: test-plan
+
+  - name: implement_test
+    edit: true
+    persona: coder
+    policy:
+      - coding
+      - testing
+    session: refresh
+    knowledge:
+      - backend
+      - architecture
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - Edit
+      - Write
+      - Bash
+      - WebSearch
+      - WebFetch
+    permission_mode: edit
+    rules:
+      - condition: テスト実装完了
+        next: ai_review
+      - condition: 実装未着手(レポートのみ)
+        next: ai_review
+      - condition: 判断できない、情報不足
+        next: ai_review
+      - condition: ユーザー入力が必要
+        next: implement_test
+        requires_user_input: true
+        interactive_only: true
+    instruction: implement-e2e-test
+    output_contracts:
+      report:
+        - Scope: 02-coder-scope.md
+        - Decisions: 03-coder-decisions.md
+
+  - name: ai_review
+    edit: false
+    persona: ai-antipattern-reviewer
+    policy:
+      - review
+      - ai-antipattern
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - WebSearch
+      - WebFetch
+    rules:
+      - condition: AI特有の問題なし
+        next: review_test
+      - condition: AI特有の問題あり
+        next: ai_fix
+    instruction: ai-review
+    output_contracts:
+      report:
+        - name: 04-ai-review.md
+          format: ai-review
+
+  - name: ai_fix
+    edit: true
+    persona: coder
+    policy:
+      - coding
+      - testing
+    session: refresh
+    knowledge:
+      - backend
+      - architecture
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - Edit
+      - Write
+      - Bash
+      - WebSearch
+      - WebFetch
+    permission_mode: edit
+    rules:
+      - condition: AI問題の修正完了
+        next: ai_review
+      - condition: 修正不要(指摘対象ファイル/仕様の確認済み)
+        next: ai_no_fix
+      - condition: 判断できない、情報不足
+        next: ai_no_fix
+    instruction: ai-fix
+
+  - name: ai_no_fix
+    edit: false
+    persona: architecture-reviewer
+    policy: review
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+    rules:
+      - condition: ai_reviewの指摘が妥当(修正すべき)
+        next: ai_fix
+      - condition: ai_fixの判断が妥当(修正不要)
+        next: review_test
+    instruction: arbitrate
+
+  - name: review_test
+    edit: false
+    persona: qa-reviewer
+    policy:
+      - review
+      - qa
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - WebSearch
+      - WebFetch
+    rules:
+      - condition: approved
+        next: supervise
+      - condition: needs_fix
+        next: fix
+    instruction: review-test
+    output_contracts:
+      report:
+        - name: 05-qa-review.md
+          format: qa-review
+
+  - name: fix
+    edit: true
+    persona: coder
+    policy:
+      - coding
+      - testing
+    session: refresh
+    knowledge:
+      - backend
+      - architecture
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - Edit
+      - Write
+      - Bash
+      - WebSearch
+      - WebFetch
+    permission_mode: edit
+    rules:
+      - condition: 修正完了
+        next: review_test
+      - condition: 判断できない、情報不足
+        next: plan_test
+    instruction: fix
+
+  - name: supervise
+    edit: false
+    persona: supervisor
+    policy: review
+    allowed_tools:
+      - Read
+      - Glob
+      - Grep
+      - Bash
+      - WebSearch
+      - WebFetch
+    rules:
+      - condition: すべて問題なし
+        next: COMPLETE
+      - condition: 要求未達成、テスト失敗、ビルドエラー
+        next: plan_test
+    instruction: supervise
+    output_contracts:
+      report:
+        - Validation: 06-supervisor-validation.md
+        - Summary: summary.md
@@ -1,41 +1,6 @@
 name: expert-cqrs
 description: CQRS+ES・フロントエンド・セキュリティ・QA専門家レビュー
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  frontend: ../knowledge/frontend.md
-  backend: ../knowledge/backend.md
-  cqrs-es: ../knowledge/cqrs-es.md
-  security: ../knowledge/security.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  cqrs-es-reviewer: ../personas/cqrs-es-reviewer.md
-  frontend-reviewer: ../personas/frontend-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  expert-supervisor: ../personas/expert-supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-cqrs-es: ../instructions/review-cqrs-es.md
-  review-frontend: ../instructions/review-frontend.md
-  review-security: ../instructions/review-security.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: plan
 movements:
   - name: plan
@@ -323,12 +288,3 @@ movements:
         next: supervise
       - condition: 修正を進行できない
         next: plan
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  cqrs-es-review: ../output-contracts/cqrs-es-review.md
-  frontend-review: ../output-contracts/frontend-review.md
-  security-review: ../output-contracts/security-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -1,39 +1,6 @@
 name: expert
 description: アーキテクチャ・フロントエンド・セキュリティ・QA専門家レビュー
 max_iterations: 30
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  frontend: ../knowledge/frontend.md
-  backend: ../knowledge/backend.md
-  security: ../knowledge/security.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  frontend-reviewer: ../personas/frontend-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  expert-supervisor: ../personas/expert-supervisor.md
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-arch: ../instructions/review-arch.md
-  review-frontend: ../instructions/review-frontend.md
-  review-security: ../instructions/review-security.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: plan
 movements:
   - name: plan
@@ -317,12 +284,3 @@ movements:
         next: supervise
       - condition: 修正を進行できない
         next: plan
-report_formats:
-  plan: ../output-contracts/plan.md
-  ai-review: ../output-contracts/ai-review.md
-  architecture-review: ../output-contracts/architecture-review.md
-  frontend-review: ../output-contracts/frontend-review.md
-  security-review: ../output-contracts/security-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -1,10 +1,6 @@
 name: magi
 description: MAGI合議システム - 3つの観点から分析し多数決で判定
 max_iterations: 5
-personas:
-  melchior: ../personas/melchior.md
-  balthasar: ../personas/balthasar.md
-  casper: ../personas/casper.md
 initial_movement: melchior
 movements:
   - name: melchior
@@ -1,21 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-personas:
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  implement: ../instructions/implement.md
-  review-ai: ../instructions/review-ai.md
-  ai-fix: ../instructions/ai-fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: implement
 movements:
   - name: implement
@@ -190,5 +175,3 @@ movements:
       - condition: 修正を進行できない
         next: implement
     instruction: fix-supervisor
-report_formats:
-  ai-review: ../output-contracts/ai-review.md
@@ -1,11 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
 max_iterations: 10
-policies:
-  coding: ../policies/coding.md
-  testing: ../policies/testing.md
-personas:
-  coder: ../personas/coder.md
 initial_movement: execute
 movements:
   - name: execute
@@ -1,10 +1,6 @@
 name: research
 description: 調査ピース - 質問せずに自律的に調査を実行
 max_iterations: 10
-personas:
-  research-planner: ../personas/research-planner.md
-  research-digger: ../personas/research-digger.md
-  research-supervisor: ../personas/research-supervisor.md
 initial_movement: plan
 movements:
   - name: plan
@@ -1,21 +1,6 @@
 name: review-fix-minimal
 description: 既存コードのレビューと修正ピース(レビュー開始、実装なし)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-personas:
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  implement: ../instructions/implement.md
-  review-ai: ../instructions/review-ai.md
-  ai-fix: ../instructions/ai-fix.md
-  supervise: ../instructions/supervise.md
-  fix-supervisor: ../instructions/fix-supervisor.md
 initial_movement: reviewers
 movements:
   - name: implement
@@ -190,5 +175,3 @@ movements:
       - condition: 修正を進行できない
         next: implement
     instruction: fix-supervisor
-report_formats:
-  ai-review: ../output-contracts/ai-review.md
@@ -1,23 +1,6 @@
 name: review-only
 description: レビュー専用ピース - コードをレビューするだけで編集は行わない
 max_iterations: 10
-policies:
-  review: ../policies/review.md
-  ai-antipattern: ../policies/ai-antipattern.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-  security: ../knowledge/security.md
-personas:
-  planner: ../personas/planner.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  security-reviewer: ../personas/security-reviewer.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  supervisor: ../personas/supervisor.md
-  pr-commenter: ../personas/pr-commenter.md
-instructions:
-  review-arch: ../instructions/review-arch.md
-  review-security: ../instructions/review-security.md
-  review-ai: ../instructions/review-ai.md
 initial_movement: plan
 movements:
   - name: plan
@@ -231,8 +214,3 @@ movements:
       ---
       *[takt](https://github.com/toruticas/takt) review-only ピースで生成*
       ```
-report_formats:
-  architecture-review: ../output-contracts/architecture-review.md
-  security-review: ../output-contracts/security-review.md
-  ai-review: ../output-contracts/ai-review.md
-  review-summary: ../output-contracts/review-summary.md
@@ -1,25 +1,6 @@
 name: structural-reform
 description: プロジェクト全体レビューと構造改革 - 段階的なファイル分割による反復的コードベース再構築
 max_iterations: 50
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  qa: ../policies/qa.md
-knowledge:
-  backend: ../knowledge/backend.md
-  architecture: ../knowledge/architecture.md
-personas:
-  planner: ../personas/planner.md
-  coder: ../personas/coder.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  implement: ../instructions/implement.md
-  review-arch: ../instructions/review-arch.md
-  review-qa: ../instructions/review-qa.md
-  fix: ../instructions/fix.md
 initial_movement: review
 loop_monitors:
   - cycle:
@@ -447,9 +428,3 @@ movements:
     output_contracts:
      report:
         - name: 07-progress.md
-report_formats:
-  plan: ../output-contracts/plan.md
-  architecture-review: ../output-contracts/architecture-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -1,31 +1,6 @@
 name: unit-test
 description: Piece specialized for adding unit tests (test analysis → test implementation → review → fix)
 max_iterations: 20
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-  testing: ../policies/testing.md
-  ai-antipattern: ../policies/ai-antipattern.md
-  qa: ../policies/qa.md
-knowledge:
-  architecture: ../knowledge/architecture.md
-  backend: ../knowledge/backend.md
-personas:
-  test-planner: ../personas/test-planner.md
-  coder: ../personas/coder.md
-  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
-  architecture-reviewer: ../personas/architecture-reviewer.md
-  qa-reviewer: ../personas/qa-reviewer.md
-  supervisor: ../personas/supervisor.md
-instructions:
-  plan-test: ../instructions/plan-test.md
-  implement-test: ../instructions/implement-test.md
-  ai-review: ../instructions/ai-review.md
-  ai-fix: ../instructions/ai-fix.md
-  arbitrate: ../instructions/arbitrate.md
-  review-test: ../instructions/review-test.md
-  fix: ../instructions/fix.md
-  supervise: ../instructions/supervise.md
 initial_movement: plan_test
 loop_monitors:
   - cycle:
@@ -259,9 +234,3 @@ movements:
       report:
         - Validation: 06-supervisor-validation.md
         - Summary: summary.md
-report_formats:
-  test-plan: ../output-contracts/test-plan.md
-  ai-review: ../output-contracts/ai-review.md
-  qa-review: ../output-contracts/qa-review.md
-  validation: ../output-contracts/validation.md
-  summary: ../output-contracts/summary.md
@@ -98,6 +98,36 @@
 - Findings without a `finding_id` are invalid (not treated as grounds for judgment)
 - REJECT is allowed only when at least one issue is `new` or `persists`
+
+## Reopen conditions (resolved → open)
+
+Reopening a resolved finding requires reproducible evidence.
+
+- When reopening a finding previously marked `resolved`, all three of the following are mandatory:
+  1. Reproduction steps (commands/inputs)
+  2. Expected and actual results
+  3. File/line of the failure
+- A reopen missing any of these three is invalid (not usable as grounds for REJECT)
+- If the reproduction steps change, issue a new `finding_id` as a separate issue
+
+## Fixed meaning of `finding_id`
+
+Never mix different issues under the same ID.
+
+- A given `finding_id` is used for one and only one issue
+- Issue a new `finding_id` when the issue's meaning, evidence file, or reproduction conditions change
+- Retroactively repurposing an existing `finding_id`'s description for a different issue is forbidden
+
+## Handling of test files
+
+Length or duplication in test files is, as a rule, treated as a maintainability warning.
+
+- Exceeding line limits or duplicated code in test files is `Warning` by default
+- `REJECT` is allowed only when concrete harm can be reproduced:
+  - Test instability (flakes)
+  - False positives / missed detections
+  - Inability to detect regressions
+- "Too long" or "has duplication" alone never justifies `REJECT`
+
 ## Boy Scout Rule
 
 Leave it cleaner than you found it.
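For illustration only, a reopen that satisfies the three mandatory items could be recorded like this (the finding ID, command, and paths are hypothetical, not taken from the codebase):

```yaml
# Hypothetical example — ID, command, and paths are invented for illustration
finding_id: F-0123                                      # same ID as the previously resolved finding
status: open                                            # reopened from resolved
reproduction: "npm test -- src/shared/escape.test.ts"   # 1. reproduction steps
expected: "all assertions pass"                         # 2. expected result
actual: "escape() drops backslashes before quotes"      # 2. actual result
location: "src/shared/escape.ts:42"                     # 3. file/line of the failure
```

A reopen missing any of the three fields stays invalid regardless of how it is recorded.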
@@ -55,14 +55,14 @@ Reference by **key name** inside a movement (do not write paths directly)
   session: refresh                 # Session management (optional)
   pass_previous_response: true     # Pass the previous output (default: true)
   allowed_tools: [...]             # Allowed tools (optional, informational)
-  instruction_template: |          # Inline instruction template (alternative to the instruction key, optional)
+  instruction_template: |          # Instruction template (reference resolution or inline, optional)
     Instruction content...
   output_contracts: [...]          # Output contract settings (optional)
   quality_gates: [...]             # Quality gates (instructions to the AI, optional)
   rules: [...]                     # Transition rules (required)
 ```
 
-**`instruction` vs `instruction_template`**: `instruction` is a key reference into the top-level `instructions:` section; `instruction_template` writes the instruction inline. Use one or the other.
+**`instruction` vs `instruction_template`**: both use the same reference-resolution route (section map → path → 3-layer facet → inline). `instruction_template` also accepts an inline string as-is. Normally use only one of the two.
 
 ### Parallel Movement
 
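As a minimal sketch of the two styles side by side (the movement names here are hypothetical):

```yaml
movements:
  - name: implement
    instruction: implement      # resolved by name: section map → path → 3-layer facet
  - name: summarize
    instruction_template: |     # inline string used as-is
      Summarize the changes made in this task.
```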
@@ -465,6 +465,7 @@ TAKT ships with several builtin pieces:
 | `review-only` | Read-only code review piece that makes no changes. |
 | `structural-reform` | Project-wide structural reform: iterative codebase restructuring with stepwise file splitting. |
 | `unit-test` | Unit-test-focused piece: test analysis → test implementation → review → fix. |
+| `e2e-test` | E2E-test-focused piece: E2E analysis → E2E implementation → review → fix (Vitest-based E2E flow). |
 
 **Per-persona provider settings:** `persona_providers` in the config file routes specific personas to different providers (e.g. coder to Codex, reviewers to Claude). There is no need to duplicate the piece.
@@ -327,52 +327,33 @@ The core mechanism of Faceted Prompting is **declarative composition**.
 
 ### Implementation example in TAKT
 
-[TAKT](https://github.com/nrslib/takt) implements Faceted Prompting with YAML-based workflow definitions called "pieces." Each concern is mapped to a short key in a section map and referenced by that key from each step (called a "movement" in TAKT).
+[TAKT](https://github.com/nrslib/takt) implements Faceted Prompting with YAML-based workflow definitions called "pieces." Each builtin facet can be referenced directly by bare name from each step (called a "movement" in TAKT). Section maps are optional and used only as custom aliases when the name differs from the file name.
 
 ```yaml
 name: my-workflow
 max_iterations: 10
 initial_movement: plan
 
-# Section map — key: file path (relative to this YAML)
-personas:
-  coder: ../personas/coder.md
-  reviewer: ../personas/architecture-reviewer.md
-
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-
-knowledge:
-  architecture: ../knowledge/architecture.md
-
-report_formats:
-  review: ../output-contracts/review.md
-
 movements:
   - name: implement
-    persona: coder          # WHO — references personas.coder
-    policy: coding          # RULES — references policies.coding
-    instruction: implement  # WHAT — references instructions.implement
-    knowledge: architecture # CONTEXT — references knowledge.architecture
+    persona: coder          # WHO — builtins/{lang}/personas/coder.md
+    policy: coding          # RULES — builtins/{lang}/policies/coding.md
+    instruction: implement  # WHAT — builtins/{lang}/instructions/implement.md
+    knowledge: architecture # CONTEXT — builtins/{lang}/knowledge/architecture.md
     edit: true
     rules:
       - condition: Implementation complete
         next: review
 
   - name: review
-    persona: reviewer       # different WHO
+    persona: architecture-reviewer  # different WHO
     policy: review          # different RULES
     instruction: review     # different WHAT (could also be shared)
     knowledge: architecture # same CONTEXT — reused
     output_contracts:
       report:
         - name: review.md
-          format: review    # OUTPUT — references report_formats.review
+          format: architecture-review  # OUTPUT — builtins/{lang}/output-contracts/architecture-review.md
     edit: false
     rules:
       - condition: Approved
@@ -327,52 +327,33 @@ Key properties:
 
 ### Implementation Example: TAKT
 
-[TAKT](https://github.com/nrslib/takt) implements Faceted Prompting using YAML-based workflow definitions called "pieces." Concerns are mapped to short keys via section maps, then referenced by key in each step (called "movement" in TAKT):
+[TAKT](https://github.com/nrslib/takt) implements Faceted Prompting using YAML-based workflow definitions called "pieces." Builtin facets can be referenced directly by bare name in each step (called "movement" in TAKT). Section maps are optional and only needed for custom aliases (name differs from file name):
 
 ```yaml
 name: my-workflow
 max_iterations: 10
 initial_movement: plan
 
-# Section maps — key: file path (relative to this YAML)
-personas:
-  coder: ../personas/coder.md
-  reviewer: ../personas/architecture-reviewer.md
-
-policies:
-  coding: ../policies/coding.md
-  review: ../policies/review.md
-
-instructions:
-  plan: ../instructions/plan.md
-  implement: ../instructions/implement.md
-
-knowledge:
-  architecture: ../knowledge/architecture.md
-
-output_contracts:
-  review: ../output-contracts/review.md
-
 movements:
   - name: implement
-    persona: coder          # WHO — references personas.coder
-    policy: coding          # RULES — references policies.coding
-    instruction: implement  # WHAT — references instructions.implement
-    knowledge: architecture # CONTEXT — references knowledge.architecture
+    persona: coder          # WHO — builtins/{lang}/personas/coder.md
+    policy: coding          # RULES — builtins/{lang}/policies/coding.md
+    instruction: implement  # WHAT — builtins/{lang}/instructions/implement.md
+    knowledge: architecture # CONTEXT — builtins/{lang}/knowledge/architecture.md
     edit: true
     rules:
      - condition: Implementation complete
        next: review
 
   - name: review
-    persona: reviewer       # Different WHO
+    persona: architecture-reviewer  # Different WHO
     policy: review          # Different RULES
     instruction: review     # Different WHAT (but could share)
     knowledge: architecture # Same CONTEXT — reused
     output_contracts:
       report:
         - name: review.md
-          format: review    # OUTPUT — references report_formats.review
+          format: architecture-review  # OUTPUT — builtins/{lang}/output-contracts/architecture-review.md
     edit: false
     rules:
      - condition: Approved
@@ -92,6 +92,16 @@ Update this document as well whenever E2E tests are added or changed
   - Add a task YAML under `.takt/tasks/` (set `piece` to `e2e/fixtures/pieces/mock-single-step.yaml`).
   - Confirm the output contains `Task "watch-task" completed`.
   - Exit with `Ctrl+C`.
+- Run tasks graceful shutdown on SIGINT (`e2e/specs/run-sigint-graceful.e2e.ts`)
+  - Purpose: confirm that `Ctrl+C` during parallel `takt run` stops scheduling new clone work and exits gracefully.
+  - LLM: not called (fixed to `--provider mock`)
+  - Steps (user actions/commands):
+    - Seed three pending tasks with `worktree: true` into `.takt/tasks.yaml` (`concurrency: 2`).
+    - Set each task's `piece` to `e2e/fixtures/pieces/mock-slow-multi-step.yaml`.
+    - Set `TAKT_MOCK_SCENARIO=e2e/fixtures/scenarios/run-sigint-parallel.json`.
+    - Start `takt run --provider mock` and send `Ctrl+C` once `=== Running Piece:` appears.
+    - Confirm the third task (`sigint-c`) never starts.
+    - Confirm no new task start or clone-creation log appears after `=== Tasks Summary ===`.
 - List tasks non-interactive (`e2e/specs/list-non-interactive.e2e.ts`)
   - Purpose: confirm branch operations work in `takt list` non-interactive mode.
   - LLM: not called (LLM-free operations only)
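The pending tasks seeded in the SIGINT steps above take roughly this shape in `.takt/tasks.yaml` (the field set mirrors the new SIGINT spec; the timestamp is illustrative):

```yaml
tasks:
  - name: sigint-a
    status: pending
    content: "E2E SIGINT task A"
    piece: "e2e/fixtures/pieces/mock-slow-multi-step.yaml"
    worktree: true
    created_at: "2026-02-10T00:00:00.000Z"
    started_at: null
    completed_at: null
    owner_pid: null
```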
e2e/fixtures/pieces/mock-slow-multi-step.yaml (new file, 79 lines)
@@ -0,0 +1,79 @@
+name: e2e-mock-slow-multi-step
+description: Multi-step mock piece to keep tasks in-flight long enough for SIGINT E2E
+
+max_iterations: 20
+
+initial_movement: step-1
+
+movements:
+  - name: step-1
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      {task}
+    rules:
+      - condition: Done
+        next: step-2
+
+  - name: step-2
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-3
+
+  - name: step-3
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-4
+
+  - name: step-4
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-5
+
+  - name: step-5
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-6
+
+  - name: step-6
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-7
+
+  - name: step-7
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Continue task execution.
+    rules:
+      - condition: Done
+        next: step-8
+
+  - name: step-8
+    edit: true
+    persona: ../agents/test-coder.md
+    instruction_template: |
+      Finalize task execution.
+    rules:
+      - condition: Done
+        next: COMPLETE
e2e/fixtures/scenarios/run-sigint-parallel.json (new file, 32 lines)
@@ -0,0 +1,32 @@
+[
+  {
+    "persona": "summarizer",
+    "status": "done",
+    "content": "sigint-a"
+  },
+  {
+    "persona": "summarizer",
+    "status": "done",
+    "content": "sigint-b"
+  },
+  {
+    "persona": "summarizer",
+    "status": "done",
+    "content": "sigint-c"
+  },
+  {
+    "persona": "test-coder",
+    "status": "done",
+    "content": "[EXECUTE:1]\n\nDone"
+  },
+  {
+    "persona": "test-coder",
+    "status": "done",
+    "content": "[EXECUTE:1]\n\nDone"
+  },
+  {
+    "persona": "test-coder",
+    "status": "done",
+    "content": "[EXECUTE:1]\n\nDone"
+  }
+]
@@ -2,6 +2,7 @@ import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { readFileSync, existsSync, mkdirSync, writeFileSync } from 'node:fs';
 import { join, resolve, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
+import { parse as parseYaml } from 'yaml';
 import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';
 import { runTakt } from '../helpers/takt-runner';
@@ -35,15 +36,22 @@ describe('E2E: Add task and run (takt add → takt run)', () => {
   it('should add a task file and execute it with takt run', () => {
     const piecePath = resolve(__dirname, '../fixtures/pieces/simple.yaml');
 
-    // Step 1: Create a task file in .takt/tasks/ (simulates `takt add`)
-    const tasksDir = join(testRepo.path, '.takt', 'tasks');
-    mkdirSync(tasksDir, { recursive: true });
+    // Step 1: Create a pending task in .takt/tasks.yaml (simulates `takt add`)
+    const taktDir = join(testRepo.path, '.takt');
+    mkdirSync(taktDir, { recursive: true });
+    const tasksFile = join(taktDir, 'tasks.yaml');
+
     const taskYaml = [
-      'task: "Add a single line \\"E2E test passed\\" to README.md"',
-      `piece: "${piecePath}"`,
+      'tasks:',
+      '  - name: e2e-test-task',
+      '    status: pending',
+      '    content: "Add a single line \\"E2E test passed\\" to README.md"',
+      `    piece: "${piecePath}"`,
+      `    created_at: "${new Date().toISOString()}"`,
+      '    started_at: null',
+      '    completed_at: null',
     ].join('\n');
-    writeFileSync(join(tasksDir, 'e2e-test-task.yaml'), taskYaml, 'utf-8');
+    writeFileSync(tasksFile, taskYaml, 'utf-8');
 
     // Step 2: Run `takt run` to execute the pending task
     const result = runTakt({
@@ -66,7 +74,10 @@ describe('E2E: Add task and run (takt add → takt run)', () => {
     const readme = readFileSync(readmePath, 'utf-8');
     expect(readme).toContain('E2E test passed');
 
-    // Verify task file was moved out of tasks/ (completed or failed)
-    expect(existsSync(join(tasksDir, 'e2e-test-task.yaml'))).toBe(false);
+    // Verify task status became completed
+    const tasksRaw = readFileSync(tasksFile, 'utf-8');
+    const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
+    const executed = parsed.tasks?.find((task) => task.name === 'e2e-test-task');
+    expect(executed?.status).toBe('completed');
   }, 240_000);
 });
@@ -1,8 +1,9 @@
 import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { execFileSync } from 'node:child_process';
-import { readFileSync, readdirSync, writeFileSync } from 'node:fs';
+import { readFileSync, writeFileSync } from 'node:fs';
 import { join, dirname, resolve } from 'node:path';
 import { fileURLToPath } from 'node:url';
+import { parse as parseYaml } from 'yaml';
 import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';
 import { runTakt } from '../helpers/takt-runner';

@@ -84,12 +85,10 @@ describe('E2E: Add task from GitHub issue (takt add)', () => {
 
     expect(result.exitCode).toBe(0);
 
-    const tasksDir = join(testRepo.path, '.takt', 'tasks');
-    const files = readdirSync(tasksDir).filter((file) => file.endsWith('.yaml'));
-    expect(files.length).toBe(1);
-    const taskFile = join(tasksDir, files[0] ?? '');
-    const content = readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('issue:');
+    const tasksFile = join(testRepo.path, '.takt', 'tasks.yaml');
+    const content = readFileSync(tasksFile, 'utf-8');
+    const parsed = parseYaml(content) as { tasks?: Array<{ issue?: number }> };
+    expect(parsed.tasks?.length).toBe(1);
+    expect(parsed.tasks?.[0]?.issue).toBe(Number(issueNumber));
   }, 240_000);
 });
e2e/specs/run-sigint-graceful.e2e.ts (new file, 176 lines)
@@ -0,0 +1,176 @@
+import { describe, it, expect, beforeEach, afterEach } from 'vitest';
+import { spawn } from 'node:child_process';
+import { mkdirSync, writeFileSync, readFileSync } from 'node:fs';
+import { join, resolve, dirname } from 'node:path';
+import { fileURLToPath } from 'node:url';
+import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
+import { createTestRepo, type TestRepo } from '../helpers/test-repo';
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+
+async function waitFor(
+  predicate: () => boolean,
+  timeoutMs: number,
+  intervalMs: number = 100,
+): Promise<boolean> {
+  const startedAt = Date.now();
+  while (Date.now() - startedAt < timeoutMs) {
+    if (predicate()) {
+      return true;
+    }
+    await new Promise((resolvePromise) => setTimeout(resolvePromise, intervalMs));
+  }
+  return false;
+}
+
+async function waitForClose(
+  child: ReturnType<typeof spawn>,
+  timeoutMs: number,
+): Promise<{ code: number | null; signal: NodeJS.Signals | null }> {
+  return await new Promise((resolvePromise, rejectPromise) => {
+    const timeout = setTimeout(() => {
+      child.kill('SIGKILL');
+      rejectPromise(new Error(`Process did not exit within ${timeoutMs}ms`));
+    }, timeoutMs);
+
+    child.on('close', (code, signal) => {
+      clearTimeout(timeout);
+      resolvePromise({ code, signal });
+    });
+  });
+}
+
+// Update docs/testing/e2e.md as well when changing E2E tests
+describe('E2E: Run tasks graceful shutdown on SIGINT (parallel)', () => {
+  let isolatedEnv: IsolatedEnv;
+  let testRepo: TestRepo;
+
+  beforeEach(() => {
+    isolatedEnv = createIsolatedEnv();
+    testRepo = createTestRepo();
+
+    writeFileSync(
+      join(isolatedEnv.taktDir, 'config.yaml'),
+      [
+        'provider: mock',
+        'model: mock-model',
+        'language: en',
+        'log_level: info',
+        'default_piece: default',
+        'concurrency: 2',
+        'task_poll_interval_ms: 100',
+      ].join('\n'),
+    );
+  });
+
+  afterEach(() => {
+    try {
+      testRepo.cleanup();
+    } catch {
+      // best-effort
+    }
+    try {
+      isolatedEnv.cleanup();
+    } catch {
+      // best-effort
+    }
+  });
+
+  it('should stop scheduling new clone work after SIGINT and exit cleanly', async () => {
+    const binPath = resolve(__dirname, '../../bin/takt');
+    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-slow-multi-step.yaml');
+    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/run-sigint-parallel.json');
+
+    const tasksFile = join(testRepo.path, '.takt', 'tasks.yaml');
+    mkdirSync(join(testRepo.path, '.takt'), { recursive: true });
+
+    const now = new Date().toISOString();
+    writeFileSync(
+      tasksFile,
+      [
+        'tasks:',
+        '  - name: sigint-a',
+        '    status: pending',
+        '    content: "E2E SIGINT task A"',
+        `    piece: "${piecePath}"`,
+        '    worktree: true',
+        `    created_at: "${now}"`,
+        '    started_at: null',
+        '    completed_at: null',
+        '    owner_pid: null',
+        '  - name: sigint-b',
+        '    status: pending',
+        '    content: "E2E SIGINT task B"',
+        `    piece: "${piecePath}"`,
+        '    worktree: true',
+        `    created_at: "${now}"`,
+        '    started_at: null',
+        '    completed_at: null',
+        '    owner_pid: null',
+        '  - name: sigint-c',
+        '    status: pending',
+        '    content: "E2E SIGINT task C"',
+        `    piece: "${piecePath}"`,
+        '    worktree: true',
+        `    created_at: "${now}"`,
+        '    started_at: null',
+        '    completed_at: null',
+        '    owner_pid: null',
+      ].join('\n'),
+      'utf-8',
+    );
+
+    const child = spawn('node', [binPath, 'run', '--provider', 'mock'], {
+      cwd: testRepo.path,
+      env: {
+        ...isolatedEnv.env,
+        TAKT_MOCK_SCENARIO: scenarioPath,
+        TAKT_E2E_SELF_SIGINT_ONCE: '1',
+      },
+      stdio: ['ignore', 'pipe', 'pipe'],
+    });
+
+    let stdout = '';
+    let stderr = '';
+    child.stdout?.on('data', (chunk) => {
+      stdout += chunk.toString();
+    });
+    child.stderr?.on('data', (chunk) => {
+      stderr += chunk.toString();
+    });
+
+    const workersFilled = await waitFor(
+      () => stdout.includes('=== Task: sigint-b ==='),
+      30_000,
+      20,
+    );
+    expect(workersFilled, `stdout:\n${stdout}\n\nstderr:\n${stderr}`).toBe(true);
+
+    const exit = await waitForClose(child, 60_000);
+
+    expect(
+      exit.signal === 'SIGINT' || exit.code === 130 || exit.code === 0,
+      `unexpected exit: code=${exit.code}, signal=${exit.signal}`,
+    ).toBe(true);
+    expect(stdout).not.toContain('=== Task: sigint-c ===');
+    expect(stdout).not.toContain('Task "sigint-c" completed');
+
+    const summaryIndex = stdout.lastIndexOf('=== Tasks Summary ===');
+    expect(summaryIndex).toBeGreaterThan(-1);
+
+    const afterSummary = stdout.slice(summaryIndex);
+    expect(afterSummary).not.toContain('=== Task:');
+    expect(afterSummary).not.toContain('=== Running Piece:');
+    expect(afterSummary).not.toContain('Creating clone...');
+
+    const finalTasksYaml = readFileSync(tasksFile, 'utf-8');
+    expect(finalTasksYaml).toMatch(
+      /name: sigint-c[\s\S]*?status: pending/,
+    );
+
+    if (stderr.trim().length > 0) {
+      expect(stderr).not.toContain('UnhandledPromiseRejection');
+    }
+  }, 120_000);
+});
@@ -1,8 +1,9 @@
 import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { spawn } from 'node:child_process';
-import { mkdirSync, writeFileSync, existsSync } from 'node:fs';
+import { mkdirSync, readFileSync, writeFileSync } from 'node:fs';
 import { join, resolve, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
+import { parse as parseYaml } from 'yaml';
 import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';

@@ -51,16 +52,21 @@ describe('E2E: Watch tasks (takt watch)', () => {
       stdout += chunk.toString();
     });
 
-    const tasksDir = join(testRepo.path, '.takt', 'tasks');
-    mkdirSync(tasksDir, { recursive: true });
+    const taktDir = join(testRepo.path, '.takt');
+    mkdirSync(taktDir, { recursive: true });
+    const tasksFile = join(taktDir, 'tasks.yaml');
+    const createdAt = new Date().toISOString();
     const taskYaml = [
-      'task: "Add a single line \\\"watch test\\\" to README.md"',
-      `piece: "${piecePath}"`,
+      'tasks:',
+      '  - name: watch-task',
+      '    status: pending',
+      '    content: "Add a single line \\"watch test\\" to README.md"',
+      `    piece: "${piecePath}"`,
+      `    created_at: "${createdAt}"`,
+      '    started_at: null',
+      '    completed_at: null',
     ].join('\n');
-    const taskPath = join(tasksDir, 'watch-task.yaml');
-    writeFileSync(taskPath, taskYaml, 'utf-8');
+    writeFileSync(tasksFile, taskYaml, 'utf-8');
 
     const completed = await new Promise<boolean>((resolvePromise) => {
       const timeout = setTimeout(() => resolvePromise(false), 240_000);

@@ -87,6 +93,9 @@ describe('E2E: Watch tasks (takt watch)', () => {
     });
 
     expect(completed).toBe(true);
-    expect(existsSync(taskPath)).toBe(false);
+    const tasksRaw = readFileSync(tasksFile, 'utf-8');
+    const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
+    const watchTask = parsed.tasks?.find((task) => task.name === 'watch-task');
+    expect(watchTask?.status).toBe('completed');
   }, 240_000);
 });
package-lock.json (generated)
@@ -1,12 +1,12 @@
 {
   "name": "takt",
-  "version": "0.10.0",
+  "version": "0.11.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "takt",
-      "version": "0.10.0",
+      "version": "0.11.0",
       "license": "MIT",
       "dependencies": {
         "@anthropic-ai/claude-agent-sdk": "^0.2.37",
package.json
@@ -1,6 +1,6 @@
 {
   "name": "takt",
-  "version": "0.10.0",
+  "version": "0.11.0",
   "description": "TAKT: Task Agent Koordination Tool - AI Agent Piece Orchestration",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
addTask command tests

```diff
@@ -1,21 +1,13 @@
-/**
- * Tests for addTask command
- */
-
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import * as fs from 'node:fs';
 import * as path from 'node:path';
 import { tmpdir } from 'node:os';
+import { parse as parseYaml } from 'yaml';
 
-// Mock dependencies before importing the module under test
 vi.mock('../features/interactive/index.js', () => ({
   interactiveMode: vi.fn(),
 }));
 
-vi.mock('../infra/providers/index.js', () => ({
-  getProvider: vi.fn(),
-}));
-
 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn(() => ({ provider: 'claude' })),
   getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
```
```diff
@@ -26,14 +18,11 @@ vi.mock('../shared/prompt/index.js', () => ({
   confirm: vi.fn(),
 }));
 
-vi.mock('../infra/task/summarize.js', () => ({
-  summarizeTaskName: vi.fn(),
-}));
-
 vi.mock('../shared/ui/index.js', () => ({
   success: vi.fn(),
   info: vi.fn(),
   blankLine: vi.fn(),
+  error: vi.fn(),
 }));
 
 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
```
```diff
@@ -76,42 +65,27 @@ vi.mock('../infra/github/issue.js', () => ({
 
 import { interactiveMode } from '../features/interactive/index.js';
 import { promptInput, confirm } from '../shared/prompt/index.js';
-import { summarizeTaskName } from '../infra/task/summarize.js';
 import { determinePiece } from '../features/tasks/execute/selectAndExecute.js';
-import { getPieceDescription } from '../infra/config/loaders/pieceResolver.js';
-import { resolveIssueTask, createIssue } from '../infra/github/issue.js';
+import { resolveIssueTask } from '../infra/github/issue.js';
 import { addTask } from '../features/tasks/index.js';
 
 const mockResolveIssueTask = vi.mocked(resolveIssueTask);
-const mockCreateIssue = vi.mocked(createIssue);
 const mockInteractiveMode = vi.mocked(interactiveMode);
 const mockPromptInput = vi.mocked(promptInput);
 const mockConfirm = vi.mocked(confirm);
-const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
 const mockDeterminePiece = vi.mocked(determinePiece);
-const mockGetPieceDescription = vi.mocked(getPieceDescription);
-
-function setupFullFlowMocks(overrides?: {
-  task?: string;
-  slug?: string;
-}) {
-  const task = overrides?.task ?? '# 認証機能追加\nJWT認証を実装する';
-  const slug = overrides?.slug ?? 'add-auth';
-
-  mockDeterminePiece.mockResolvedValue('default');
-  mockGetPieceDescription.mockReturnValue({ name: 'default', description: '', pieceStructure: '' });
-  mockInteractiveMode.mockResolvedValue({ action: 'execute', task });
-  mockSummarizeTaskName.mockResolvedValue(slug);
-  mockConfirm.mockResolvedValue(false);
-}
 
 let testDir: string;
 
+function loadTasks(dir: string): { tasks: Array<Record<string, unknown>> } {
+  const raw = fs.readFileSync(path.join(dir, '.takt', 'tasks.yaml'), 'utf-8');
+  return parseYaml(raw) as { tasks: Array<Record<string, unknown>> };
+}
+
 beforeEach(() => {
   vi.clearAllMocks();
   testDir = fs.mkdtempSync(path.join(tmpdir(), 'takt-test-'));
   mockDeterminePiece.mockResolvedValue('default');
-  mockGetPieceDescription.mockReturnValue({ name: 'default', description: '', pieceStructure: '' });
   mockConfirm.mockResolvedValue(false);
 });
 
```
```diff
@@ -122,332 +96,46 @@ afterEach(() => {
 });
 
 describe('addTask', () => {
-  it('should cancel when interactive mode is not confirmed', async () => {
-    // Given: user cancels interactive mode
-    mockDeterminePiece.mockResolvedValue('default');
-    mockInteractiveMode.mockResolvedValue({ action: 'cancel', task: '' });
+  it('should create task entry from interactive result', async () => {
+    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: '# 認証機能追加\nJWT認証を実装する' });
 
-    // When
     await addTask(testDir);
 
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.existsSync(tasksDir) ? fs.readdirSync(tasksDir) : [];
-    expect(files.length).toBe(0);
-    expect(mockSummarizeTaskName).not.toHaveBeenCalled();
+    const tasks = loadTasks(testDir).tasks;
+    expect(tasks).toHaveLength(1);
+    expect(tasks[0]?.content).toContain('JWT認証を実装する');
+    expect(tasks[0]?.piece).toBe('default');
   });
 
-  it('should create task file with AI-summarized content', async () => {
-    // Given: full flow setup
-    setupFullFlowMocks();
-
-    // When
-    await addTask(testDir);
-
-    // Then: task file created with summarized content
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const taskFile = path.join(tasksDir, 'add-auth.yaml');
-    expect(fs.existsSync(taskFile)).toBe(true);
-
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('# 認証機能追加');
-    expect(content).toContain('JWT認証を実装する');
-  });
-
-  it('should use first line of task for filename generation', async () => {
-    setupFullFlowMocks({
-      task: 'First line summary\nSecond line details',
-      slug: 'first-line',
-    });
-
-    await addTask(testDir);
-
-    expect(mockSummarizeTaskName).toHaveBeenCalledWith('First line summary', { cwd: testDir });
-  });
-
-  it('should append counter for duplicate filenames', async () => {
-    // Given: first task creates 'my-task.yaml'
-    setupFullFlowMocks({ slug: 'my-task' });
-    await addTask(testDir);
-
-    // When: create second task with same slug
-    setupFullFlowMocks({ slug: 'my-task' });
-    await addTask(testDir);
-
-    // Then: second file has counter
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    expect(fs.existsSync(path.join(tasksDir, 'my-task.yaml'))).toBe(true);
-    expect(fs.existsSync(path.join(tasksDir, 'my-task-1.yaml'))).toBe(true);
-  });
-
-  it('should include worktree option when confirmed', async () => {
-    // Given: user confirms worktree
-    setupFullFlowMocks({ slug: 'with-worktree' });
+  it('should include worktree settings when enabled', async () => {
+    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'Task content' });
     mockConfirm.mockResolvedValue(true);
-    mockPromptInput.mockResolvedValue('');
+    mockPromptInput.mockResolvedValueOnce('/custom/path').mockResolvedValueOnce('feat/branch');
 
-    // When
     await addTask(testDir);
 
-    // Then
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'with-worktree.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('worktree: true');
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.worktree).toBe('/custom/path');
+    expect(task.branch).toBe('feat/branch');
   });
 
-  it('should include custom worktree path when provided', async () => {
-    // Given: user provides custom worktree path
-    setupFullFlowMocks({ slug: 'custom-path' });
-    mockConfirm.mockResolvedValue(true);
-    mockPromptInput
-      .mockResolvedValueOnce('/custom/path')
-      .mockResolvedValueOnce('');
-
-    // When
-    await addTask(testDir);
-
-    // Then
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'custom-path.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('worktree: /custom/path');
-  });
-
-  it('should include branch when provided', async () => {
-    // Given: user provides custom branch
-    setupFullFlowMocks({ slug: 'with-branch' });
-    mockConfirm.mockResolvedValue(true);
-    mockPromptInput
-      .mockResolvedValueOnce('')
-      .mockResolvedValueOnce('feat/my-branch');
-
-    // When
-    await addTask(testDir);
-
-    // Then
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'with-branch.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('branch: feat/my-branch');
-  });
-
-  it('should include piece selection in task file', async () => {
-    // Given: determinePiece returns a non-default piece
-    setupFullFlowMocks({ slug: 'with-piece' });
-    mockDeterminePiece.mockResolvedValue('review');
-    mockGetPieceDescription.mockReturnValue({ name: 'review', description: 'Code review piece', pieceStructure: '' });
-    mockConfirm.mockResolvedValue(false);
-
-    // When
-    await addTask(testDir);
-
-    // Then
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'with-piece.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('piece: review');
-  });
-
-  it('should cancel when piece selection returns null', async () => {
-    // Given: user cancels piece selection
-    mockDeterminePiece.mockResolvedValue(null);
-
-    // When
-    await addTask(testDir);
-
-    // Then: no task file created (cancelled at piece selection)
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    expect(files.length).toBe(0);
-  });
-
-  it('should always include piece from determinePiece', async () => {
-    // Given: determinePiece returns 'default'
-    setupFullFlowMocks({ slug: 'default-wf' });
-    mockDeterminePiece.mockResolvedValue('default');
-    mockConfirm.mockResolvedValue(false);
-
-    // When
-    await addTask(testDir);
-
-    // Then: piece field is included
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'default-wf.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('piece: default');
-  });
-
-  it('should fetch issue and use directly as task content when given issue reference', async () => {
-    // Given: issue reference "#99"
-    const issueText = 'Issue #99: Fix login timeout\n\nThe login page times out after 30 seconds.';
-    mockResolveIssueTask.mockReturnValue(issueText);
-    mockDeterminePiece.mockResolvedValue('default');
-
-    mockSummarizeTaskName.mockResolvedValue('fix-login-timeout');
-    mockConfirm.mockResolvedValue(false);
+  it('should create task from issue reference without interactive mode', async () => {
+    mockResolveIssueTask.mockReturnValue('Issue #99: Fix login timeout');
 
-    // When
     await addTask(testDir, '#99');
 
-    // Then: interactiveMode should NOT be called
     expect(mockInteractiveMode).not.toHaveBeenCalled();
-
-    // Then: resolveIssueTask was called
-    expect(mockResolveIssueTask).toHaveBeenCalledWith('#99');
-
-    // Then: determinePiece was called for piece selection
-    expect(mockDeterminePiece).toHaveBeenCalledWith(testDir);
-
-    // Then: task file created with issue text directly (no AI summarization)
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'fix-login-timeout.yaml');
-    expect(fs.existsSync(taskFile)).toBe(true);
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('Fix login timeout');
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.content).toContain('Fix login timeout');
+    expect(task.issue).toBe(99);
   });
 
-  it('should proceed to worktree settings after issue fetch', async () => {
-    // Given: issue with worktree enabled
-    mockResolveIssueTask.mockReturnValue('Issue text');
-    mockDeterminePiece.mockResolvedValue('default');
-    mockSummarizeTaskName.mockResolvedValue('issue-task');
-    mockConfirm.mockResolvedValue(true);
-    mockPromptInput
-      .mockResolvedValueOnce('') // worktree path (auto)
-      .mockResolvedValueOnce(''); // branch name (auto)
-
-    // When
-    await addTask(testDir, '#42');
-
-    // Then: worktree settings applied
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'issue-task.yaml');
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('worktree: true');
-  });
-
-  it('should handle GitHub API failure gracefully for issue reference', async () => {
-    // Given: resolveIssueTask throws
-    mockResolveIssueTask.mockImplementation(() => {
-      throw new Error('GitHub API rate limit exceeded');
-    });
-
-    // When
-    await addTask(testDir, '#99');
-
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    expect(files.length).toBe(0);
-  });
-
-  it('should include issue number in task file when issue reference is used', async () => {
-    // Given: issue reference "#99"
-    const issueText = 'Issue #99: Fix login timeout';
-    mockResolveIssueTask.mockReturnValue(issueText);
-    mockDeterminePiece.mockResolvedValue('default');
-    mockSummarizeTaskName.mockResolvedValue('fix-login-timeout');
-    mockConfirm.mockResolvedValue(false);
-
-    // When
-    await addTask(testDir, '#99');
-
-    // Then: task file contains issue field
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'fix-login-timeout.yaml');
-    expect(fs.existsSync(taskFile)).toBe(true);
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('issue: 99');
-  });
-
-  it('should include piece selection in task file when issue reference is used', async () => {
-    // Given: issue reference "#99" with non-default piece selection
-    const issueText = 'Issue #99: Fix login timeout';
-    mockResolveIssueTask.mockReturnValue(issueText);
-    mockDeterminePiece.mockResolvedValue('review');
-    mockSummarizeTaskName.mockResolvedValue('fix-login-timeout');
-    mockConfirm.mockResolvedValue(false);
-
-    // When
-    await addTask(testDir, '#99');
-
-    // Then: task file contains piece field
-    const taskFile = path.join(testDir, '.takt', 'tasks', 'fix-login-timeout.yaml');
-    expect(fs.existsSync(taskFile)).toBe(true);
-    const content = fs.readFileSync(taskFile, 'utf-8');
-    expect(content).toContain('piece: review');
-  });
-
-  it('should cancel when piece selection returns null for issue reference', async () => {
-    // Given: issue fetched successfully but user cancels piece selection
-    const issueText = 'Issue #99: Fix login timeout';
-    mockResolveIssueTask.mockReturnValue(issueText);
+  it('should not create task when piece selection is cancelled', async () => {
     mockDeterminePiece.mockResolvedValue(null);
 
-    // When
-    await addTask(testDir, '#99');
-
-    // Then: no task file created (cancelled at piece selection)
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    expect(files.length).toBe(0);
-
-    // Then: issue was fetched before cancellation
-    expect(mockResolveIssueTask).toHaveBeenCalledWith('#99');
-  });
-
-  it('should call auto-PR confirm with default true', async () => {
-    // Given: worktree is confirmed so auto-PR prompt is reached
-    setupFullFlowMocks({ slug: 'auto-pr-default' });
-    mockConfirm.mockResolvedValue(true);
-    mockPromptInput.mockResolvedValue('');
-
-    // When
     await addTask(testDir);
 
-    // Then: second confirm call (Auto-create PR?) has defaultYes=true
-    const autoPrCall = mockConfirm.mock.calls.find(
-      (call) => call[0] === 'Auto-create PR?',
-    );
-    expect(autoPrCall).toBeDefined();
-    expect(autoPrCall![1]).toBe(true);
-  });
-
-  describe('create_issue action', () => {
-    it('should call createIssue when create_issue action is selected', async () => {
-      // Given: interactive mode returns create_issue action
-      const task = 'Create a new feature\nWith detailed description';
-      mockDeterminePiece.mockResolvedValue('default');
-      mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task });
-      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/1' });
-
-      // When
-      await addTask(testDir);
-
-      // Then: createIssue is called via createIssueFromTask
-      expect(mockCreateIssue).toHaveBeenCalledWith({
-        title: 'Create a new feature',
-        body: task,
-      });
-    });
-
-    it('should not create task file when create_issue action is selected', async () => {
-      // Given: interactive mode returns create_issue action
-      mockDeterminePiece.mockResolvedValue('default');
-      mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'Some task' });
-      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/1' });
-
-      // When
-      await addTask(testDir);
-
-      // Then: no task file created
-      const tasksDir = path.join(testDir, '.takt', 'tasks');
-      const files = fs.existsSync(tasksDir) ? fs.readdirSync(tasksDir) : [];
-      expect(files.length).toBe(0);
-    });
-
-    it('should not prompt for worktree settings when create_issue action is selected', async () => {
-      // Given: interactive mode returns create_issue action
-      mockDeterminePiece.mockResolvedValue('default');
-      mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'Some task' });
-      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/1' });
-
-      // When
-      await addTask(testDir);
-
-      // Then: confirm (worktree prompt) is never called
-      expect(mockConfirm).not.toHaveBeenCalled();
-    });
+    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
   });
 });
```
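The migrated assertions in this diff read tasks back from a single `.takt/tasks.yaml` through a `loadTasks` helper instead of per-task files under `.takt/tasks/`. A `tasks.yaml` satisfying those assertions would look roughly like the sketch below — only `content`, `piece`, `worktree`, `branch`, and `issue` appear in these tests, `name`/`status` come from the watch-task assertions earlier in the diff, and everything else about the real `TaskRecordSchema` is an assumption:

```yaml
tasks:
  - name: add-auth            # slug used as the task's name (assumed)
    status: pending           # lifecycle per changelog: pending/running/completed/failed
    piece: default            # piece selected by determinePiece
    content: |
      # 認証機能追加
      JWT認証を実装する
    worktree: /custom/path    # only present when worktree is enabled (assumed)
    branch: feat/branch
    issue: 99                 # only present for issue-reference tasks (assumed)
```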
src/__tests__/aggregate-evaluator.test.ts (new file, 347 lines)

```typescript
/**
 * Unit tests for AggregateEvaluator
 *
 * Tests all()/any() aggregate condition evaluation against sub-movement results.
 */

import { describe, it, expect } from 'vitest';
import { AggregateEvaluator } from '../core/piece/evaluation/AggregateEvaluator.js';
import type { PieceMovement, PieceState, AgentResponse } from '../core/models/types.js';

function makeState(outputs: Record<string, { matchedRuleIndex?: number }>): PieceState {
  const movementOutputs = new Map<string, AgentResponse>();
  for (const [name, data] of Object.entries(outputs)) {
    movementOutputs.set(name, {
      persona: name,
      status: 'done',
      content: '',
      timestamp: new Date(),
      matchedRuleIndex: data.matchedRuleIndex,
    });
  }
  return {
    pieceName: 'test',
    currentMovement: 'parent',
    iteration: 1,
    movementOutputs,
    userInputs: [],
    personaSessions: new Map(),
    movementIterations: new Map(),
    status: 'running',
  };
}

function makeSubMovement(name: string, conditions: string[]): PieceMovement {
  return {
    name,
    personaDisplayName: name,
    instructionTemplate: '',
    passPreviousResponse: false,
    rules: conditions.map((c) => ({ condition: c })),
  };
}

function makeParentMovement(
  parallel: PieceMovement[],
  rules: PieceMovement['rules'],
): PieceMovement {
  return {
    name: 'parent',
    personaDisplayName: 'parent',
    instructionTemplate: '',
    passPreviousResponse: false,
    parallel,
    rules,
  };
}

describe('AggregateEvaluator', () => {
  describe('all() with single condition', () => {
    it('should match when all sub-movements have matching condition', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'all approved',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: 'approved',
          next: 'COMPLETE',
        },
      ]);

      // Both sub-movements matched rule index 0 ("approved")
      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
        'review-b': { matchedRuleIndex: 0 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(0);
    });

    it('should not match when one sub-movement has different condition', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'all approved',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: 'approved',
          next: 'COMPLETE',
        },
      ]);

      // sub1 matched "approved" (index 0), sub2 matched "rejected" (index 1)
      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
        'review-b': { matchedRuleIndex: 1 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });

    it('should not match when sub-movement has no matched rule', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'all approved',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: 'approved',
          next: 'COMPLETE',
        },
      ]);

      // sub2 has no matched rule
      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
        'review-b': {},
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });
  });

  describe('all() with multiple conditions (order-based)', () => {
    it('should match when each sub-movement matches its corresponding condition', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'A approved, B rejected',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: ['approved', 'rejected'],
          next: 'COMPLETE',
        },
      ]);

      const state = makeState({
        'review-a': { matchedRuleIndex: 0 }, // "approved"
        'review-b': { matchedRuleIndex: 1 }, // "rejected"
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(0);
    });

    it('should not match when condition count differs from sub-movement count', () => {
      const sub1 = makeSubMovement('review-a', ['approved']);

      const step = makeParentMovement([sub1], [
        {
          condition: 'mismatch',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: ['approved', 'rejected'],
          next: 'COMPLETE',
        },
      ]);

      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });
  });

  describe('any() with single condition', () => {
    it('should match when at least one sub-movement has matching condition', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'any approved',
          isAggregateCondition: true,
          aggregateType: 'any',
          aggregateConditionText: 'approved',
          next: 'fix',
        },
      ]);

      // Only sub1 matched "approved"
      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
        'review-b': { matchedRuleIndex: 1 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(0);
    });

    it('should not match when no sub-movement has matching condition', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'any approved',
          isAggregateCondition: true,
          aggregateType: 'any',
          aggregateConditionText: 'approved',
          next: 'fix',
        },
      ]);

      // Both matched "rejected" (index 1)
      const state = makeState({
        'review-a': { matchedRuleIndex: 1 },
        'review-b': { matchedRuleIndex: 1 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });
  });

  describe('any() with multiple conditions', () => {
    it('should match when any sub-movement matches any of the conditions', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected', 'needs-work']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected', 'needs-work']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'any approved or needs-work',
          isAggregateCondition: true,
          aggregateType: 'any',
          aggregateConditionText: ['approved', 'needs-work'],
          next: 'fix',
        },
      ]);

      // sub1 matched "rejected" (index 1), sub2 matched "needs-work" (index 2)
      const state = makeState({
        'review-a': { matchedRuleIndex: 1 },
        'review-b': { matchedRuleIndex: 2 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(0);
    });
  });

  describe('edge cases', () => {
    it('should return -1 when step has no rules', () => {
      const step = makeParentMovement([], undefined);
      const state = makeState({});
      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });

    it('should return -1 when step has no parallel sub-movements', () => {
      const step: PieceMovement = {
        name: 'test-movement',
        personaDisplayName: 'tester',
        instructionTemplate: '',
        passPreviousResponse: false,
        rules: [
          {
            condition: 'all approved',
            isAggregateCondition: true,
            aggregateType: 'all',
            aggregateConditionText: 'approved',
          },
        ],
      };
      const state = makeState({});
      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });

    it('should return -1 when rules exist but none are aggregate conditions', () => {
      const sub1 = makeSubMovement('review-a', ['approved']);
      const step = makeParentMovement([sub1], [
        { condition: 'approved', next: 'COMPLETE' },
      ]);
      const state = makeState({ 'review-a': { matchedRuleIndex: 0 } });
      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(-1);
    });

    it('should evaluate multiple rules and return first matching index', () => {
      const sub1 = makeSubMovement('review-a', ['approved', 'rejected']);
      const sub2 = makeSubMovement('review-b', ['approved', 'rejected']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'all approved',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: 'approved',
          next: 'COMPLETE',
        },
        {
          condition: 'any rejected',
          isAggregateCondition: true,
          aggregateType: 'any',
          aggregateConditionText: 'rejected',
          next: 'fix',
        },
      ]);

      // sub1: approved, sub2: rejected → first rule (all approved) fails, second (any rejected) matches
      const state = makeState({
        'review-a': { matchedRuleIndex: 0 },
        'review-b': { matchedRuleIndex: 1 },
      });

      const evaluator = new AggregateEvaluator(step, state);
      expect(evaluator.evaluate()).toBe(1);
    });

    it('should skip sub-movements missing from state outputs', () => {
      const sub1 = makeSubMovement('review-a', ['approved']);
      const sub2 = makeSubMovement('review-b', ['approved']);

      const step = makeParentMovement([sub1, sub2], [
        {
          condition: 'all approved',
          isAggregateCondition: true,
          aggregateType: 'all',
          aggregateConditionText: 'approved',
          next: 'COMPLETE',
        },
      ]);

      // review-b is missing from state
      const state = makeState({
```
|
||||||
|
'review-a': { matchedRuleIndex: 0 },
|
||||||
|
});
|
||||||
|
|
||||||
|
const evaluator = new AggregateEvaluator(step, state);
|
||||||
|
expect(evaluator.evaluate()).toBe(-1);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
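Read together, these cases pin down the aggregate semantics: each parallel sub-movement reports which of its rule conditions matched, and an aggregate rule on the parent matches when all (or any) of those matched condition texts are among the rule's `aggregateConditionText`; `evaluate()` returns the index of the first satisfied rule, or -1. A minimal standalone sketch of that evaluation, using simplified local types rather than the project's real `PieceMovement`/state models (the missing-output handling is simplified to "no aggregate can match"):

```typescript
// Simplified stand-ins for the project's types (hypothetical, for illustration only).
interface SubMovement {
  name: string;
  conditions: string[]; // per-rule condition text, indexed by matchedRuleIndex
}

interface AggregateRule {
  aggregateType: 'all' | 'any';
  aggregateConditionText: string | string[];
}

type Outputs = Map<string, { matchedRuleIndex: number }>;

// Returns the index of the first aggregate rule satisfied by the sub-movement
// outputs, or -1 when none matches (or when any sub-movement output is missing).
function evaluateAggregate(subs: SubMovement[], rules: AggregateRule[], outputs: Outputs): number {
  if (subs.length === 0 || rules.length === 0) return -1;
  const matched: string[] = [];
  for (const sub of subs) {
    const out = outputs.get(sub.name);
    if (!out) return -1; // a sub-movement never reported: aggregates cannot be satisfied
    matched.push(sub.conditions[out.matchedRuleIndex]!);
  }
  return rules.findIndex((rule) => {
    const accepted = Array.isArray(rule.aggregateConditionText)
      ? rule.aggregateConditionText
      : [rule.aggregateConditionText];
    return rule.aggregateType === 'all'
      ? matched.every((c) => accepted.includes(c))
      : matched.some((c) => accepted.includes(c));
  });
}
```

Against the "multiple rules" case above, `matched` becomes `['approved', 'rejected']`, so the `all approved` rule fails and the `any rejected` rule at index 1 wins.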
src/__tests__/blocked-handler.test.ts (new file, 110 additions)
/**
 * Unit tests for blocked-handler
 *
 * Tests blocked state handling including user input callback flow.
 */

import { describe, it, expect, vi } from 'vitest';
import { handleBlocked } from '../core/piece/engine/blocked-handler.js';
import type { PieceMovement, AgentResponse } from '../core/models/types.js';
import type { PieceEngineOptions } from '../core/piece/types.js';

function makeMovement(): PieceMovement {
  return {
    name: 'test-movement',
    personaDisplayName: 'tester',
    instructionTemplate: '',
    passPreviousResponse: false,
  };
}

function makeResponse(content: string): AgentResponse {
  return {
    persona: 'tester',
    status: 'blocked',
    content,
    timestamp: new Date(),
  };
}

function makeOptions(overrides: Partial<PieceEngineOptions> = {}): PieceEngineOptions {
  return {
    projectCwd: '/tmp/project',
    ...overrides,
  };
}

describe('handleBlocked', () => {
  it('should return shouldContinue=false when no onUserInput callback', async () => {
    const result = await handleBlocked(
      makeMovement(),
      makeResponse('blocked message'),
      makeOptions(),
    );

    expect(result.shouldContinue).toBe(false);
    expect(result.userInput).toBeUndefined();
  });

  it('should call onUserInput and return user input', async () => {
    const onUserInput = vi.fn().mockResolvedValue('user response');
    const result = await handleBlocked(
      makeMovement(),
      makeResponse('質問: どうしますか?'),
      makeOptions({ onUserInput }),
    );

    expect(result.shouldContinue).toBe(true);
    expect(result.userInput).toBe('user response');
    expect(onUserInput).toHaveBeenCalledOnce();
  });

  it('should return shouldContinue=false when user cancels (returns null)', async () => {
    const onUserInput = vi.fn().mockResolvedValue(null);
    const result = await handleBlocked(
      makeMovement(),
      makeResponse('blocked'),
      makeOptions({ onUserInput }),
    );

    expect(result.shouldContinue).toBe(false);
    expect(result.userInput).toBeUndefined();
  });

  it('should pass extracted prompt in the request', async () => {
    const onUserInput = vi.fn().mockResolvedValue('answer');
    await handleBlocked(
      makeMovement(),
      makeResponse('質問: 環境は何ですか?'),
      makeOptions({ onUserInput }),
    );

    const request = onUserInput.mock.calls[0]![0];
    expect(request.prompt).toBe('環境は何ですか?');
  });

  it('should pass the full content as prompt when no pattern matches', async () => {
    const onUserInput = vi.fn().mockResolvedValue('answer');
    const content = 'I need more information to continue';
    await handleBlocked(
      makeMovement(),
      makeResponse(content),
      makeOptions({ onUserInput }),
    );

    const request = onUserInput.mock.calls[0]![0];
    expect(request.prompt).toBe(content);
  });

  it('should pass movement and response in the request', async () => {
    const step = makeMovement();
    const response = makeResponse('blocked');
    const onUserInput = vi.fn().mockResolvedValue('answer');

    await handleBlocked(step, response, makeOptions({ onUserInput }));

    const request = onUserInput.mock.calls[0]![0];
    expect(request.movement).toBe(step);
    expect(request.response).toBe(response);
  });
});
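The contract these tests pin down can be captured in a compact sketch. This is not the project's implementation, only a plausible shape consistent with the assertions: no callback means stop, a null answer means the user cancelled, and a leading `質問:` marker is stripped to form the prompt (that specific prefix pattern is an assumption inferred from the test strings):

```typescript
// Hypothetical local stand-ins for the real result/options types.
interface BlockedResult {
  shouldContinue: boolean;
  userInput?: string;
}

interface BlockedOptions {
  onUserInput?: (request: { prompt: string }) => Promise<string | null>;
}

// Minimal sketch: ask the user for input when a movement reports 'blocked'.
async function handleBlockedSketch(content: string, options: BlockedOptions): Promise<BlockedResult> {
  if (!options.onUserInput) {
    return { shouldContinue: false }; // no way to ask the user: give up
  }
  // Strip a leading question marker so the user sees only the question itself;
  // when no marker matches, pass the full content through unchanged.
  const match = /^質問[::]\s*([\s\S]*)$/.exec(content);
  const prompt = match ? match[1]! : content;
  const answer = await options.onUserInput({ prompt });
  if (answer === null) {
    return { shouldContinue: false }; // user cancelled
  }
  return { shouldContinue: true, userInput: answer };
}
```

The real `handleBlocked` additionally receives the movement and raw response so the callback can render context; the sketch keeps only the control flow.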
src/__tests__/branchGitResolver.performance.test.ts (new file, 288 additions)
import { beforeEach, describe, expect, it, vi } from 'vitest';

vi.mock('node:child_process', () => ({
  execFileSync: vi.fn(),
}));

import { execFileSync } from 'node:child_process';
import {
  createBranchBaseResolutionCache,
  findFirstTaktCommit,
  resolveBranchBaseCommit,
} from '../infra/task/branchGitResolver.js';

const mockExecFileSync = vi.mocked(execFileSync);

describe('branchGitResolver performance', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should skip full ref scan when default branch candidate resolves', () => {
    mockExecFileSync.mockImplementation((cmd, args) => {
      if (cmd !== 'git') {
        throw new Error('unexpected command');
      }

      if (args[0] === 'reflog') {
        throw new Error('reflog unavailable');
      }

      if (args[0] === 'merge-base' && args[1] === 'main') {
        return 'base-main';
      }

      if (args[0] === 'merge-base' && args[1] === 'origin/main') {
        throw new Error('origin/main not available');
      }

      if (args[0] === 'rev-list') {
        return '1';
      }

      if (args[0] === 'log' && args[1] === '--format=%s') {
        return 'takt: first';
      }

      if (args[0] === 'for-each-ref') {
        throw new Error('for-each-ref should not be called');
      }

      throw new Error(`unexpected git args: ${args.join(' ')}`);
    });

    const baseCommit = resolveBranchBaseCommit('/project', 'main', 'takt/feature-a');

    expect(baseCommit).toBe('base-main');
    expect(mockExecFileSync).not.toHaveBeenCalledWith(
      'git',
      ['for-each-ref', '--format=%(refname:short)', 'refs/heads', 'refs/remotes'],
      expect.anything(),
    );
  });

  it('should skip full ref scan when default branch candidate resolves without takt prefix', () => {
    mockExecFileSync.mockImplementation((cmd, args) => {
      if (cmd !== 'git') {
        throw new Error('unexpected command');
      }

      if (args[0] === 'reflog') {
        throw new Error('reflog unavailable');
      }

      if (args[0] === 'merge-base' && args[1] === 'main') {
        return 'base-main';
      }

      if (args[0] === 'merge-base' && args[1] === 'origin/main') {
        throw new Error('origin/main not available');
      }

      if (args[0] === 'rev-list') {
        return '3';
      }

      if (args[0] === 'log' && args[1] === '--format=%s') {
        return 'feat: add new feature';
      }

      if (args[0] === 'for-each-ref') {
        throw new Error('for-each-ref should not be called');
      }

      throw new Error(`unexpected git args: ${args.join(' ')}`);
    });

    const baseCommit = resolveBranchBaseCommit('/project', 'main', 'takt/feature-a');

    expect(baseCommit).toBe('base-main');
    expect(mockExecFileSync).not.toHaveBeenCalledWith(
      'git',
      ['for-each-ref', '--format=%(refname:short)', 'refs/heads', 'refs/remotes'],
      expect.anything(),
    );
  });

  it('should reuse ref list cache across branch resolutions', () => {
    mockExecFileSync.mockImplementation((cmd, args) => {
      if (cmd !== 'git') {
        throw new Error('unexpected command');
      }

      if (args[0] === 'reflog') {
        throw new Error('reflog unavailable');
      }

      if (args[0] === 'merge-base') {
        const baseRef = args[1];
        const branch = args[2];
        if (baseRef === 'main' || baseRef === 'origin/main') {
          throw new Error('priority refs unavailable');
        }
        if (baseRef === 'develop' && branch === 'takt/feature-a') {
          return 'base-a';
        }
        if (baseRef === 'origin/develop' && branch === 'takt/feature-a') {
          return 'base-a-remote';
        }
        if (baseRef === 'develop' && branch === 'takt/feature-b') {
          return 'base-b';
        }
        if (baseRef === 'origin/develop' && branch === 'takt/feature-b') {
          return 'base-b-remote';
        }
        throw new Error(`unexpected merge-base args: ${args.join(' ')}`);
      }

      if (args[0] === 'for-each-ref') {
        return 'develop\norigin/develop\n';
      }

      if (args[0] === 'rev-parse' && args[1] === '--git-common-dir') {
        return '/project/.git';
      }

      if (args[0] === 'rev-list') {
        const range = args[3];
        if (range === 'base-a..takt/feature-a') {
          return '1';
        }
        if (range === 'base-a-remote..takt/feature-a') {
          return '5';
        }
        if (range === 'base-b..takt/feature-b') {
          return '1';
        }
        if (range === 'base-b-remote..takt/feature-b') {
          return '6';
        }
        throw new Error(`unexpected rev-list args: ${args.join(' ')}`);
      }

      if (args[0] === 'log' && args[1] === '--format=%s') {
        return 'takt: first';
      }

      throw new Error(`unexpected git args: ${args.join(' ')}`);
    });

    const cache = createBranchBaseResolutionCache();
    const baseA = resolveBranchBaseCommit('/project', 'main', 'takt/feature-a', cache);
    const baseB = resolveBranchBaseCommit('/project', 'main', 'takt/feature-b', cache);

    expect(baseA).toBe('base-a');
    expect(baseB).toBe('base-b');

    const forEachRefCalls = mockExecFileSync.mock.calls.filter(([, args]) =>
      args[0] === 'for-each-ref',
    );
    expect(forEachRefCalls).toHaveLength(1);
  });

  it('should skip reflog lookup when baseCommit is provided to findFirstTaktCommit', () => {
    mockExecFileSync.mockImplementation((cmd, args) => {
      if (cmd !== 'git') {
        throw new Error('unexpected command');
      }

      if (args[0] === 'reflog') {
        throw new Error('reflog should not be called');
      }

      if (args[0] === 'log' && args[1] === '--format=%H\t%s') {
        return 'abc123\ttakt: first instruction\n';
      }

      throw new Error(`unexpected git args: ${args.join(' ')}`);
    });

    const first = findFirstTaktCommit('/project', 'main', 'takt/feature-a', { baseCommit: 'base-a' });

    expect(first).toEqual({ subject: 'takt: first instruction' });
    expect(mockExecFileSync).not.toHaveBeenCalledWith(
      'git',
      ['reflog', 'show', '--format=%H', 'takt/feature-a'],
      expect.anything(),
    );
  });

  it('should reuse ref list cache across worktrees in the same repository', () => {
    mockExecFileSync.mockImplementation((cmd, args, options) => {
      if (cmd !== 'git') {
        throw new Error('unexpected command');
      }

      if (args[0] === 'reflog') {
        throw new Error('reflog unavailable');
      }

      if (args[0] === 'rev-parse' && args[1] === '--git-common-dir') {
        if (options?.cwd === '/repo/worktrees/a' || options?.cwd === '/repo/worktrees/b') {
          return '/repo/.git';
        }
        throw new Error(`unexpected rev-parse cwd: ${String(options?.cwd)}`);
      }

      if (args[0] === 'merge-base') {
        const baseRef = args[1];
        const branch = args[2];
        if (baseRef === 'main' || baseRef === 'origin/main') {
          throw new Error('priority refs unavailable');
        }
        if (baseRef === 'develop' && branch === 'takt/feature-a') {
          return 'base-a';
        }
        if (baseRef === 'origin/develop' && branch === 'takt/feature-a') {
          return 'base-a-remote';
        }
        if (baseRef === 'develop' && branch === 'takt/feature-b') {
          return 'base-b';
        }
        if (baseRef === 'origin/develop' && branch === 'takt/feature-b') {
          return 'base-b-remote';
        }
        throw new Error(`unexpected merge-base args: ${args.join(' ')}`);
      }

      if (args[0] === 'for-each-ref') {
        return 'develop\norigin/develop\n';
      }

      if (args[0] === 'rev-list') {
        const range = args[3];
        if (range === 'base-a..takt/feature-a') {
          return '1';
        }
        if (range === 'base-a-remote..takt/feature-a') {
          return '5';
        }
        if (range === 'base-b..takt/feature-b') {
          return '1';
        }
        if (range === 'base-b-remote..takt/feature-b') {
          return '6';
        }
        throw new Error(`unexpected rev-list args: ${args.join(' ')}`);
      }

      if (args[0] === 'log' && args[1] === '--format=%s') {
        return 'takt: first';
      }

      throw new Error(`unexpected git args: ${args.join(' ')}`);
    });

    const cache = createBranchBaseResolutionCache();
    const baseA = resolveBranchBaseCommit('/repo/worktrees/a', 'main', 'takt/feature-a', cache);
    const baseB = resolveBranchBaseCommit('/repo/worktrees/b', 'main', 'takt/feature-b', cache);

    expect(baseA).toBe('base-a');
    expect(baseB).toBe('base-b');

    const forEachRefCalls = mockExecFileSync.mock.calls.filter(([, args]) =>
      args[0] === 'for-each-ref',
    );
    expect(forEachRefCalls).toHaveLength(1);
  });
});
src/__tests__/branchList.regression.test.ts (new file, 145 additions)
import { execFileSync } from 'node:child_process';
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { afterEach, describe, expect, it } from 'vitest';
import { getFilesChanged, getOriginalInstruction } from '../infra/task/branchList.js';

function runGit(cwd: string, args: string[]): string {
  return execFileSync('git', args, { cwd, encoding: 'utf-8', stdio: 'pipe' }).trim();
}

function isUnsupportedInitBranchOptionError(error: unknown): boolean {
  if (!(error instanceof Error)) {
    return false;
  }
  return /unknown switch [`'-]?b/.test(error.message);
}

function writeAndCommit(repo: string, fileName: string, content: string, message: string): void {
  writeFileSync(join(repo, fileName), content, 'utf-8');
  runGit(repo, ['add', fileName]);
  runGit(repo, ['commit', '-m', message]);
}

function setupRepoForIssue167(options?: { disableReflog?: boolean; firstBranchCommitMessage?: string }): { repoDir: string; branch: string } {
  const repoDir = mkdtempSync(join(tmpdir(), 'takt-branchlist-'));
  try {
    runGit(repoDir, ['init', '-b', 'main']);
  } catch (error) {
    if (!isUnsupportedInitBranchOptionError(error)) {
      throw error;
    }
    runGit(repoDir, ['init']);
  }
  if (options?.disableReflog) {
    runGit(repoDir, ['config', 'core.logallrefupdates', 'false']);
  }
  runGit(repoDir, ['config', 'user.name', 'takt-test']);
  runGit(repoDir, ['config', 'user.email', 'takt-test@example.com']);

  writeAndCommit(repoDir, 'main.txt', 'main\n', 'main base');
  runGit(repoDir, ['branch', '-M', 'main']);

  runGit(repoDir, ['checkout', '-b', 'develop']);
  writeAndCommit(repoDir, 'develop-a.txt', 'develop a\n', 'develop commit A');
  writeAndCommit(repoDir, 'develop-takt.txt', 'develop takt\n', 'takt: old instruction on develop');
  writeAndCommit(repoDir, 'develop-b.txt', 'develop b\n', 'develop commit B');

  const taktBranch = 'takt/#167/fix-original-instruction';
  runGit(repoDir, ['checkout', '-b', taktBranch]);
  const firstBranchCommitMessage = options?.firstBranchCommitMessage ?? 'takt: github-issue-167-fix-original-instruction';
  writeAndCommit(repoDir, 'task-1.txt', 'task1\n', firstBranchCommitMessage);
  writeAndCommit(repoDir, 'task-2.txt', 'task2\n', 'follow-up implementation');

  return { repoDir, branch: taktBranch };
}

describe('branchList regression for issue #167', () => {
  const tempDirs: string[] = [];

  afterEach(() => {
    while (tempDirs.length > 0) {
      const dir = tempDirs.pop();
      if (dir) {
        rmSync(dir, { recursive: true, force: true });
      }
    }
  });

  it('should resolve originalInstruction correctly even when HEAD is main', () => {
    const fixture = setupRepoForIssue167();
    tempDirs.push(fixture.repoDir);
    runGit(fixture.repoDir, ['checkout', 'main']);

    const instruction = getOriginalInstruction(fixture.repoDir, 'main', fixture.branch);

    expect(instruction).toBe('github-issue-167-fix-original-instruction');
  });

  it('should keep filesChanged non-zero even when HEAD is target branch', () => {
    const fixture = setupRepoForIssue167();
    tempDirs.push(fixture.repoDir);
    runGit(fixture.repoDir, ['checkout', fixture.branch]);

    const changed = getFilesChanged(fixture.repoDir, 'main', fixture.branch);

    expect(changed).toBe(2);
  });

  it('should ignore takt commits that exist only on base branch history', () => {
    const fixture = setupRepoForIssue167();
    tempDirs.push(fixture.repoDir);
    runGit(fixture.repoDir, ['checkout', 'main']);

    const instruction = getOriginalInstruction(fixture.repoDir, 'main', fixture.branch);
    const changed = getFilesChanged(fixture.repoDir, 'main', fixture.branch);

    expect(instruction).toBe('github-issue-167-fix-original-instruction');
    expect(changed).toBe(2);
  });

  it('should keep original instruction and changed files after merging branch into develop', () => {
    const fixture = setupRepoForIssue167();
    tempDirs.push(fixture.repoDir);

    runGit(fixture.repoDir, ['checkout', 'develop']);
    runGit(fixture.repoDir, ['merge', '--no-ff', fixture.branch, '-m', 'merge takt branch']);
    runGit(fixture.repoDir, ['checkout', 'main']);

    const instruction = getOriginalInstruction(fixture.repoDir, 'main', fixture.branch);
    const changed = getFilesChanged(fixture.repoDir, 'main', fixture.branch);

    expect(instruction).toBe('github-issue-167-fix-original-instruction');
    expect(changed).toBe(2);
  });

  it('should resolve correctly without branch reflog by inferring base from refs', () => {
    const fixture = setupRepoForIssue167({ disableReflog: true });
    tempDirs.push(fixture.repoDir);
    runGit(fixture.repoDir, ['checkout', 'main']);

    const instruction = getOriginalInstruction(fixture.repoDir, 'main', fixture.branch);
    const changed = getFilesChanged(fixture.repoDir, 'main', fixture.branch);

    // Priority ref (main) resolves immediately without full ref scan (#191).
    // With main as base, the first takt commit found is from develop's history.
    expect(instruction).toBe('old instruction on develop');
    expect(changed).toBe(5);
  });

  it('should use inferred branch base when first branch commit has no takt prefix and reflog is unavailable', () => {
    const fixture = setupRepoForIssue167({
      disableReflog: true,
      firstBranchCommitMessage: 'Initial branch implementation',
    });
    tempDirs.push(fixture.repoDir);
    runGit(fixture.repoDir, ['checkout', 'main']);

    const instruction = getOriginalInstruction(fixture.repoDir, 'main', fixture.branch);

    // Priority ref (main) resolves immediately without full ref scan (#191).
    // With main as base, the first takt commit found is from develop's history.
    expect(instruction).toBe('old instruction on develop');
  });
});
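The fixture above encodes the convention these helpers rely on: the first commit on the task branch after the base carries a `takt: ` subject prefix, and the remainder of that subject is the original instruction. A standalone sketch of that extraction over commit subjects (base-exclusive range, ordered oldest first; the helper name is hypothetical):

```typescript
// Given commit subjects from base..branch ordered oldest first, recover the
// original instruction from the first subject carrying the 'takt: ' prefix.
function extractOriginalInstruction(subjects: string[]): string | null {
  for (const subject of subjects) {
    if (subject.startsWith('takt: ')) {
      return subject.slice('takt: '.length);
    }
  }
  return null;
}
```

This also explains the two reflog-less cases: once the inferred base falls back to `main`, the base-exclusive range includes develop's history, so the first `takt: ` subject found is `takt: old instruction on develop` rather than the branch's own.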
src/__tests__/buildListItems.performance.test.ts (new file, 67 additions)
import { beforeEach, describe, expect, it, vi } from 'vitest';

vi.mock('node:child_process', () => ({
  execFileSync: vi.fn(),
}));

vi.mock('../infra/task/branchGitResolver.js', () => ({
  createBranchBaseResolutionCache: vi.fn(() => ({
    allCandidateRefsByRepositoryKey: new Map<string, string[]>(),
    repositoryKeyByGitCwd: new Map<string, string>(),
  })),
  resolveGitCwd: vi.fn((cwd: string, worktreePath?: string) => worktreePath ?? cwd),
  resolveBranchBaseCommit: vi.fn((_: string, __: string, branch: string) => `base-${branch}`),
  findFirstTaktCommit: vi.fn((_: string, __: string, branch: string) => ({ subject: `takt: instruction-${branch}` })),
}));

import { execFileSync } from 'node:child_process';
import {
  buildListItems,
  type BranchInfo,
} from '../infra/task/branchList.js';
import {
  findFirstTaktCommit,
  resolveBranchBaseCommit,
} from '../infra/task/branchGitResolver.js';

const mockExecFileSync = vi.mocked(execFileSync);
const mockResolveBranchBaseCommit = vi.mocked(resolveBranchBaseCommit);
const mockFindFirstTaktCommit = vi.mocked(findFirstTaktCommit);

describe('buildListItems performance', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    mockExecFileSync.mockImplementation((cmd, args) => {
      if (cmd === 'git' && args[0] === 'diff') {
        return '1\t0\tfile.ts\n';
      }
      throw new Error(`Unexpected git args: ${args.join(' ')}`);
    });
  });

  it('should resolve base commit once per branch and reuse it for files/instruction', () => {
    const branches: BranchInfo[] = [
      { branch: 'takt/20260128-task-a', commit: 'abc123' },
      { branch: 'takt/20260128-task-b', commit: 'def456' },
    ];

    const items = buildListItems('/project', branches, 'main');

    expect(items).toHaveLength(2);
    expect(mockResolveBranchBaseCommit).toHaveBeenCalledTimes(2);
    expect(mockFindFirstTaktCommit).toHaveBeenNthCalledWith(
      1,
      '/project',
      'main',
      'takt/20260128-task-a',
      expect.objectContaining({ baseCommit: 'base-takt/20260128-task-a' }),
    );
    expect(mockFindFirstTaktCommit).toHaveBeenNthCalledWith(
      2,
      '/project',
      'main',
      'takt/20260128-task-b',
      expect.objectContaining({ baseCommit: 'base-takt/20260128-task-b' }),
    );
  });
});
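The assertion pattern here is worth spelling out: the optimization under test is that the base commit is resolved once per branch and then threaded into the instruction lookup via `{ baseCommit }`, instead of each helper re-resolving it. A minimal sketch of that resolve-once-and-thread shape, with hypothetical callback types standing in for the real resolver functions:

```typescript
interface ListItemSketch {
  branch: string;
  baseCommit: string | null;
  instruction: string | null;
}

type ResolveBase = (branch: string) => string | null;
type FindFirst = (branch: string, opts: { baseCommit: string | null }) => { subject: string } | null;

// Resolve the base commit once per branch and pass it down, so the
// instruction lookup never repeats the (expensive) base resolution.
function buildItemsSketch(branches: string[], resolveBase: ResolveBase, findFirst: FindFirst): ListItemSketch[] {
  return branches.map((branch) => {
    const baseCommit = resolveBase(branch); // exactly one resolution per branch
    const first = findFirst(branch, { baseCommit });
    const instruction = first && first.subject.startsWith('takt: ')
      ? first.subject.slice('takt: '.length)
      : null;
    return { branch, baseCommit, instruction };
  });
}
```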
@@ -44,6 +44,15 @@ describe('getBuiltinPiece', () => {
     expect(piece!.name).toBe('default');
   });
+
+  it('should resolve builtin instruction_template without projectCwd', () => {
+    const piece = getBuiltinPiece('default');
+    expect(piece).not.toBeNull();
+
+    const planMovement = piece!.movements.find((movement) => movement.name === 'plan');
+    expect(planMovement).toBeDefined();
+    expect(planMovement!.instructionTemplate).not.toBe('plan');
+  });
 
   it('should return null for non-existent piece names', () => {
     expect(getBuiltinPiece('nonexistent-piece')).toBeNull();
     expect(getBuiltinPiece('unknown')).toBeNull();
@@ -208,7 +217,7 @@ describe('loadPiece (builtin fallback)', () => {
     expect(piece).toBeNull();
   });
 
-  it('should load builtin pieces like minimal, research', () => {
+  it('should load builtin pieces like minimal, research, e2e-test', () => {
     const minimal = loadPiece('minimal', process.cwd());
     expect(minimal).not.toBeNull();
     expect(minimal!.name).toBe('minimal');
@@ -216,6 +225,10 @@ describe('loadPiece (builtin fallback)', () => {
     const research = loadPiece('research', process.cwd());
     expect(research).not.toBeNull();
     expect(research!.name).toBe('research');
+
+    const e2eTest = loadPiece('e2e-test', process.cwd());
+    expect(e2eTest).not.toBeNull();
+    expect(e2eTest!.name).toBe('e2e-test');
   });
 });
@@ -237,6 +250,7 @@ describe('listPieces (builtin fallback)', () => {
     const pieces = listPieces(testDir);
     expect(pieces).toContain('default');
     expect(pieces).toContain('minimal');
+    expect(pieces).toContain('e2e-test');
   });
 
   it('should return sorted list', () => {
@@ -141,4 +141,28 @@ describe('PieceEngine Integration: Blocked Handling', () => {
     expect(userInputFn).toHaveBeenCalledOnce();
     expect(state.userInputs).toContain('User provided clarification');
   });
+
+  it('should abort immediately when movement returns error status', async () => {
+    const config = buildDefaultPieceConfig();
+    const onUserInput = vi.fn().mockResolvedValueOnce('should not be called');
+    const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir, onUserInput });
+
+    mockRunAgentSequence([
+      makeResponse({ persona: 'plan', status: 'error', content: 'Transport error', error: 'Transport error' }),
+    ]);
+
+    mockDetectMatchedRuleSequence([
+      { index: 0, method: 'phase1_tag' },
+    ]);
+
+    const abortFn = vi.fn();
+    engine.on('piece:abort', abortFn);
+
+    const state = await engine.run();
+
+    expect(state.status).toBe('aborted');
+    expect(onUserInput).not.toHaveBeenCalled();
+    expect(abortFn).toHaveBeenCalledWith(expect.anything(), expect.stringContaining('Transport error'));
+  });
+
 });
@@ -4,7 +4,7 @@
  * Covers:
  * - One sub-movement fails while another succeeds → piece continues
  * - All sub-movements fail → piece aborts
- * - Failed sub-movement is recorded as blocked with error
+ * - Failed sub-movement is recorded as error with error message
  */
 
 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
@@ -141,10 +141,10 @@ describe('PieceEngine Integration: Parallel Movement Partial Failure', () => {
 
     expect(state.status).toBe('completed');
 
-    // arch-review should be recorded as blocked
+    // arch-review should be recorded as error
     const archReviewOutput = state.movementOutputs.get('arch-review');
     expect(archReviewOutput).toBeDefined();
-    expect(archReviewOutput!.status).toBe('blocked');
+    expect(archReviewOutput!.status).toBe('error');
     expect(archReviewOutput!.error).toContain('exit');
 
     // security-review should be recorded as done
src/__tests__/error-utils.test.ts (new file, 39 lines)
@@ -0,0 +1,39 @@
+/**
+ * Unit tests for error utilities
+ *
+ * Tests error message extraction from unknown error types.
+ */
+
+import { describe, it, expect } from 'vitest';
+import { getErrorMessage } from '../shared/utils/error.js';
+
+describe('getErrorMessage', () => {
+  it('should extract message from Error instances', () => {
+    expect(getErrorMessage(new Error('test error'))).toBe('test error');
+  });
+
+  it('should extract message from Error subclasses', () => {
+    expect(getErrorMessage(new TypeError('type error'))).toBe('type error');
+    expect(getErrorMessage(new RangeError('range error'))).toBe('range error');
+  });
+
+  it('should convert string to message', () => {
+    expect(getErrorMessage('string error')).toBe('string error');
+  });
+
+  it('should convert number to message', () => {
+    expect(getErrorMessage(42)).toBe('42');
+  });
+
+  it('should convert null to message', () => {
+    expect(getErrorMessage(null)).toBe('null');
+  });
+
+  it('should convert undefined to message', () => {
+    expect(getErrorMessage(undefined)).toBe('undefined');
+  });
+
+  it('should convert object to message', () => {
+    expect(getErrorMessage({ code: 'ERR' })).toBe('[object Object]');
+  });
+});
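The tests above fully pin down the helper's contract: Error instances (and subclasses) yield their `.message`, everything else is string-coerced. A minimal sketch of an implementation consistent with those assertions — the real `src/shared/utils/error.ts` may differ, so treat this as an assumption:

```typescript
// Hypothetical re-implementation of getErrorMessage, matching the test contract.
function getErrorMessage(error: unknown): string {
  if (error instanceof Error) {
    // Covers Error and subclasses such as TypeError and RangeError.
    return error.message;
  }
  // Strings pass through; numbers, null, undefined, and plain objects
  // are coerced the same way String() coerces them.
  return String(error);
}

console.log(getErrorMessage(new TypeError('type error'))); // → 'type error'
console.log(getErrorMessage(null)); // → 'null'
```

Routing every `catch (e: unknown)` through one such helper keeps the `blocked` vs `error` status split (see the changelog) from depending on ad-hoc `e.message` accesses.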
src/__tests__/escape.test.ts (new file, 190 lines)
@@ -0,0 +1,190 @@
+/**
+ * Unit tests for template escaping and placeholder replacement
+ *
+ * Tests escapeTemplateChars and replaceTemplatePlaceholders functions.
+ */
+
+import { describe, it, expect } from 'vitest';
+import {
+  escapeTemplateChars,
+  replaceTemplatePlaceholders,
+} from '../core/piece/instruction/escape.js';
+import type { PieceMovement } from '../core/models/types.js';
+import type { InstructionContext } from '../core/piece/instruction/instruction-context.js';
+
+function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
+  return {
+    name: 'test-movement',
+    personaDisplayName: 'tester',
+    instructionTemplate: '',
+    passPreviousResponse: false,
+    ...overrides,
+  };
+}
+
+function makeContext(overrides: Partial<InstructionContext> = {}): InstructionContext {
+  return {
+    task: 'test task',
+    iteration: 1,
+    maxIterations: 10,
+    movementIteration: 1,
+    cwd: '/tmp/test',
+    projectCwd: '/tmp/project',
+    userInputs: [],
+    ...overrides,
+  };
+}
+
+describe('escapeTemplateChars', () => {
+  it('should replace curly braces with full-width equivalents', () => {
+    expect(escapeTemplateChars('{hello}')).toBe('{hello}');
+  });
+
+  it('should handle multiple braces', () => {
+    expect(escapeTemplateChars('{{nested}}')).toBe('{{nested}}');
+  });
+
+  it('should return unchanged string when no braces', () => {
+    expect(escapeTemplateChars('no braces here')).toBe('no braces here');
+  });
+
+  it('should handle empty string', () => {
+    expect(escapeTemplateChars('')).toBe('');
+  });
+
+  it('should handle braces in code snippets', () => {
+    const input = 'function foo() { return { a: 1 }; }';
+    const expected = 'function foo() { return { a: 1 }; }';
+    expect(escapeTemplateChars(input)).toBe(expected);
+  });
+});
+
+describe('replaceTemplatePlaceholders', () => {
+  it('should replace {task} placeholder', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ task: 'implement feature X' });
+    const template = 'Your task is: {task}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Your task is: implement feature X');
+  });
+
+  it('should escape braces in task content', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ task: 'fix {bug} in code' });
+    const template = '{task}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('fix {bug} in code');
+  });
+
+  it('should replace {iteration} and {max_iterations}', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ iteration: 3, maxIterations: 20 });
+    const template = 'Iteration {iteration}/{max_iterations}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Iteration 3/20');
+  });
+
+  it('should replace {movement_iteration}', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ movementIteration: 5 });
+    const template = 'Movement run #{movement_iteration}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Movement run #5');
+  });
+
+  it('should replace {previous_response} when passPreviousResponse is true', () => {
+    const step = makeMovement({ passPreviousResponse: true });
+    const ctx = makeContext({
+      previousOutput: {
+        persona: 'coder',
+        status: 'done',
+        content: 'previous output text',
+        timestamp: new Date(),
+      },
+    });
+    const template = 'Previous: {previous_response}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Previous: previous output text');
+  });
+
+  it('should replace {previous_response} with empty string when no previous output', () => {
+    const step = makeMovement({ passPreviousResponse: true });
+    const ctx = makeContext();
+    const template = 'Previous: {previous_response}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Previous: ');
+  });
+
+  it('should not replace {previous_response} when passPreviousResponse is false', () => {
+    const step = makeMovement({ passPreviousResponse: false });
+    const ctx = makeContext({
+      previousOutput: {
+        persona: 'coder',
+        status: 'done',
+        content: 'should not appear',
+        timestamp: new Date(),
+      },
+    });
+    const template = 'Previous: {previous_response}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Previous: {previous_response}');
+  });
+
+  it('should replace {user_inputs} with joined inputs', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ userInputs: ['input 1', 'input 2', 'input 3'] });
+    const template = 'Inputs: {user_inputs}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Inputs: input 1\ninput 2\ninput 3');
+  });
+
+  it('should replace {report_dir} with report directory', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ reportDir: '/tmp/reports/run-1' });
+    const template = 'Reports: {report_dir}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Reports: /tmp/reports/run-1');
+  });
+
+  it('should replace {report:filename} with full path', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ reportDir: '/tmp/reports' });
+    const template = 'Read {report:review.md} and {report:plan.md}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Read /tmp/reports/review.md and /tmp/reports/plan.md');
+  });
+
+  it('should handle template with multiple different placeholders', () => {
+    const step = makeMovement();
+    const ctx = makeContext({
+      task: 'test task',
+      iteration: 2,
+      maxIterations: 5,
+      movementIteration: 1,
+      reportDir: '/reports',
+    });
+    const template = '{task} - iter {iteration}/{max_iterations} - mv {movement_iteration} - dir {report_dir}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('test task - iter 2/5 - mv 1 - dir /reports');
+  });
+
+  it('should leave unreplaced placeholders when reportDir is undefined', () => {
+    const step = makeMovement();
+    const ctx = makeContext({ reportDir: undefined });
+    const template = 'Dir: {report_dir} File: {report:test.md}';
+
+    const result = replaceTemplatePlaceholders(template, step, ctx);
+    expect(result).toBe('Dir: {report_dir} File: {report:test.md}');
+  });
+});
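As the escape tests describe, ASCII braces in substituted content are swapped for full-width equivalents ({ }) so user-supplied text can never be re-read as a template placeholder. A minimal sketch of that mechanism — the function body and the `renderTask` helper are assumptions, not the project's actual code:

```typescript
// Hypothetical sketch of brace escaping, matching the tested behavior.
function escapeTemplateChars(text: string): string {
  // Swap ASCII braces for their full-width Unicode equivalents so the
  // escaped text survives a later placeholder-replacement pass untouched.
  return text.replace(/\{/g, '{').replace(/\}/g, '}');
}

// Hypothetical substitution pass: escape dynamic content *before* splicing
// it into the template, so only the template's own braces act as placeholders.
function renderTask(template: string, task: string): string {
  return template.replace('{task}', escapeTemplateChars(task));
}

console.log(renderTask('Your task is: {task}', 'fix {bug} in code'));
// → 'Your task is: fix {bug} in code'
```

Escaping at substitution time (rather than escaping the template) is what lets the `{previous_response}` literal survive unreplaced when `passPreviousResponse` is false.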
@@ -508,4 +508,110 @@ describe('normalizePieceConfig with layer resolution', () => {
     expect(config.movements[0]!.knowledgeContents).toBeDefined();
     expect(config.movements[0]!.knowledgeContents![0]).toBe('# Domain Knowledge');
   });
+
+  it('should resolve instruction_template from section map before layer resolution', () => {
+    const raw = {
+      name: 'test-piece',
+      instructions: {
+        implement: 'Mapped instruction template',
+      },
+      movements: [
+        {
+          name: 'step1',
+          persona: 'coder',
+          instruction_template: 'implement',
+          instruction: '{task}',
+        },
+      ],
+    };
+
+    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
+    const config = normalizePieceConfig(raw, pieceDir, context);
+
+    expect(config.movements[0]!.instructionTemplate).toBe('Mapped instruction template');
+  });
+
+  it('should resolve instruction_template by name via layer resolution', () => {
+    const instructionsDir = join(projectDir, '.takt', 'instructions');
+    mkdirSync(instructionsDir, { recursive: true });
+    writeFileSync(join(instructionsDir, 'implement.md'), 'Project implement template');
+
+    const raw = {
+      name: 'test-piece',
+      movements: [
+        {
+          name: 'step1',
+          persona: 'coder',
+          instruction_template: 'implement',
+          instruction: '{task}',
+        },
+      ],
+    };
+
+    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
+    const config = normalizePieceConfig(raw, pieceDir, context);
+
+    expect(config.movements[0]!.instructionTemplate).toBe('Project implement template');
+  });
+
+  it('should keep inline instruction_template when no facet is found', () => {
+    const inlineTemplate = `Use this inline template.
+Second line remains inline.`;
+    const raw = {
+      name: 'test-piece',
+      movements: [
+        {
+          name: 'step1',
+          persona: 'coder',
+          instruction_template: inlineTemplate,
+          instruction: '{task}',
+        },
+      ],
+    };
+
+    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
+    const config = normalizePieceConfig(raw, pieceDir, context);
+
+    expect(config.movements[0]!.instructionTemplate).toBe(inlineTemplate);
+  });
+
+  it('should resolve loop monitor judge instruction_template via layer resolution', () => {
+    const instructionsDir = join(projectDir, '.takt', 'instructions');
+    mkdirSync(instructionsDir, { recursive: true });
+    writeFileSync(join(instructionsDir, 'judge-template.md'), 'Project judge template');
+
+    const raw = {
+      name: 'test-piece',
+      movements: [
+        {
+          name: 'step1',
+          persona: 'coder',
+          instruction: '{task}',
+          rules: [{ condition: 'next', next: 'step2' }],
+        },
+        {
+          name: 'step2',
+          persona: 'coder',
+          instruction: '{task}',
+          rules: [{ condition: 'done', next: 'COMPLETE' }],
+        },
+      ],
+      loop_monitors: [
+        {
+          cycle: ['step1', 'step2'],
+          threshold: 2,
+          judge: {
+            persona: 'coder',
+            instruction_template: 'judge-template',
+            rules: [{ condition: 'continue', next: 'step2' }],
+          },
+        },
+      ],
+    };
+
+    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
+    const config = normalizePieceConfig(raw, pieceDir, context);
+
+    expect(config.loopMonitors?.[0]?.judge.instructionTemplate).toBe('Project judge template');
+  });
 });
src/__tests__/getFilesChanged.test.ts (new file, 103 lines)
@@ -0,0 +1,103 @@
+import { beforeEach, describe, expect, it, vi } from 'vitest';
+
+vi.mock('node:child_process', () => ({
+  execFileSync: vi.fn(),
+}));
+
+import { execFileSync } from 'node:child_process';
+const mockExecFileSync = vi.mocked(execFileSync);
+
+import { getFilesChanged } from '../infra/task/branchList.js';
+
+beforeEach(() => {
+  vi.clearAllMocks();
+});
+
+describe('getFilesChanged', () => {
+  it('should count changed files from branch entry base commit via reflog', () => {
+    mockExecFileSync
+      .mockReturnValueOnce('f00dbabe\nfeedface\nabc123\n')
+      .mockReturnValueOnce('1\t0\tfile1.ts\n2\t1\tfile2.ts\n');
+
+    const result = getFilesChanged('/project', 'main', 'takt/20260128-fix-auth');
+
+    expect(result).toBe(2);
+    expect(mockExecFileSync).toHaveBeenNthCalledWith(
+      2,
+      'git',
+      ['diff', '--numstat', 'abc123..takt/20260128-fix-auth'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
+  });
+
+  it('should infer base from refs when reflog is unavailable', () => {
+    let developMergeBaseCalls = 0;
+    mockExecFileSync.mockImplementation((cmd, args) => {
+      if (cmd !== 'git') {
+        throw new Error('unexpected command');
+      }
+
+      if (args[0] === 'reflog') {
+        throw new Error('reflog unavailable');
+      }
+
+      if (args[0] === 'merge-base' && args[1] === 'develop') {
+        developMergeBaseCalls += 1;
+        if (developMergeBaseCalls === 1) {
+          throw new Error('priority develop failed');
+        }
+        return 'base999\n';
+      }
+
+      if (args[0] === 'merge-base' && args[1] === 'origin/develop') {
+        throw new Error('priority origin/develop failed');
+      }
+
+      if (args[0] === 'rev-parse' && args[1] === '--git-common-dir') {
+        return '.git\n';
+      }
+
+      if (args[0] === 'for-each-ref') {
+        return 'develop\n';
+      }
+
+      if (args[0] === 'rev-list') {
+        return '1\n';
+      }
+
+      if (args[0] === 'log' && args[1] === '--format=%s') {
+        return 'takt: initial\n';
+      }
+
+      if (args[0] === 'diff' && args[1] === '--numstat') {
+        return '1\t0\tfile1.ts\n';
+      }
+
+      throw new Error(`Unexpected git args: ${args.join(' ')}`);
+    });
+
+    const result = getFilesChanged('/project', 'develop', 'takt/20260128-fix-auth');
+
+    expect(result).toBe(1);
+    expect(mockExecFileSync).toHaveBeenCalledWith(
+      'git',
+      ['for-each-ref', '--format=%(refname:short)', 'refs/heads', 'refs/remotes'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
+    expect(mockExecFileSync).toHaveBeenCalledWith(
+      'git',
+      ['merge-base', 'develop', 'takt/20260128-fix-auth'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
+  });
+
+  it('should return 0 when base commit resolution fails', () => {
+    mockExecFileSync.mockImplementation(() => {
+      throw new Error('base resolution failed');
+    });
+
+    const result = getFilesChanged('/project', 'main', 'takt/20260128-fix-auth');
+
+    expect(result).toBe(0);
+  });
+});
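The assertions above imply a fallback chain: the branch's base commit comes from its reflog (last entry) when available, otherwise from `git merge-base` against candidate base refs, and the changed-file count is simply the number of `git diff --numstat` lines; any resolution failure yields 0. The git plumbing is mocked in the tests, so a pure sketch of just the counting step is enough to show the parse — `countNumstatFiles` is a hypothetical helper, not the project's API:

```typescript
// Hypothetical helper mirroring how the tests count changed files:
// `git diff --numstat` emits one "added\tdeleted\tpath" line per file.
function countNumstatFiles(numstatOutput: string): number {
  return numstatOutput
    .split('\n')
    .filter((line) => line.trim().length > 0) // ignore the trailing newline
    .length;
}

console.log(countNumstatFiles('1\t0\tfile1.ts\n2\t1\tfile2.ts\n')); // → 2
console.log(countNumstatFiles('')); // → 0
```

Returning 0 on resolution failure (rather than throwing) keeps `takt list` rendering even for branches whose base cannot be inferred.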
@@ -19,29 +19,95 @@ beforeEach(() => {
 });

 describe('getOriginalInstruction', () => {
-  it('should extract instruction from takt-prefixed commit message', () => {
-    mockExecFileSync.mockReturnValue('takt: 認証機能を追加する\ntakt: fix-auth\n');
+  it('should extract instruction from branch entry commit via reflog', () => {
+    mockExecFileSync
+      .mockReturnValueOnce('last789\nfirst456\nbase123\n')
+      .mockReturnValueOnce('takt: 認証機能を追加する\n');

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-fix-auth');

     expect(result).toBe('認証機能を追加する');
     expect(mockExecFileSync).toHaveBeenCalledWith(
       'git',
-      ['log', '--format=%s', '--reverse', 'main..takt/20260128-fix-auth'],
+      ['reflog', 'show', '--format=%H', 'takt/20260128-fix-auth'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
+    expect(mockExecFileSync).toHaveBeenCalledWith(
+      'git',
+      ['show', '-s', '--format=%s', 'first456'],
       expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
     );
   });

-  it('should return first commit message without takt prefix if not present', () => {
-    mockExecFileSync.mockReturnValue('Initial implementation\n');
+  it('should infer base from refs when reflog is unavailable', () => {
+    let developMergeBaseCalls = 0;
+    mockExecFileSync.mockImplementation((cmd, args) => {
+      if (cmd !== 'git') {
+        throw new Error('unexpected command');
+      }
+
+      if (args[0] === 'reflog') {
+        throw new Error('reflog unavailable');
+      }
+
+      if (args[0] === 'merge-base' && args[1] === 'main') {
+        throw new Error('priority main failed');
+      }
+
+      if (args[0] === 'merge-base' && args[1] === 'origin/main') {
+        throw new Error('priority origin/main failed');
+      }
+
+      if (args[0] === 'rev-parse' && args[1] === '--git-common-dir') {
+        return '.git\n';
+      }
+
+      if (args[0] === 'for-each-ref') {
+        return 'develop\n';
+      }
+
+      if (args[0] === 'merge-base' && args[1] === 'develop') {
+        developMergeBaseCalls += 1;
+        if (developMergeBaseCalls === 1) {
+          return 'base123\n';
+        }
+        throw new Error('unexpected second develop merge-base');
+      }
+
+      if (args[0] === 'rev-list') {
+        return '2\n';
+      }
+
+      if (args[0] === 'log' && args[1] === '--format=%s') {
+        return 'takt: Initial implementation\nfollow-up\n';
+      }
+
+      if (args[0] === 'log' && args[1] === '--format=%H\t%s') {
+        return 'first456\ttakt: Initial implementation\n';
+      }
+
+      throw new Error(`Unexpected git args: ${args.join(' ')}`);
+    });

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-fix-auth');

     expect(result).toBe('Initial implementation');
+    expect(mockExecFileSync).toHaveBeenCalledWith(
+      'git',
+      ['for-each-ref', '--format=%(refname:short)', 'refs/heads', 'refs/remotes'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
+    expect(mockExecFileSync).toHaveBeenCalledWith(
+      'git',
+      ['merge-base', 'develop', 'takt/20260128-fix-auth'],
+      expect.objectContaining({ cwd: '/project', encoding: 'utf-8' }),
+    );
   });

   it('should return empty string when no commits on branch', () => {
-    mockExecFileSync.mockReturnValue('');
+    mockExecFileSync
+      .mockReturnValueOnce('last789\nfirst456\nbase123\n')
+      .mockReturnValueOnce('');

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-fix-auth');

@@ -59,7 +125,9 @@ describe('getOriginalInstruction', () => {
   });

   it('should handle multi-line commit messages (use only first line)', () => {
-    mockExecFileSync.mockReturnValue('takt: Fix the login bug\ntakt: follow-up fix\n');
+    mockExecFileSync
+      .mockReturnValueOnce('f00dbabe\ndeadbeef\nbase123\n')
+      .mockReturnValueOnce('takt: Fix the login bug\n');

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-fix-login');

@@ -67,8 +135,9 @@ describe('getOriginalInstruction', () => {
   });

   it('should return empty string when takt prefix has no content', () => {
-    // "takt: \n" trimmed → "takt:", starts with "takt:" → slice + trim → ""
-    mockExecFileSync.mockReturnValue('takt: \n');
+    mockExecFileSync
+      .mockReturnValueOnce('cafebabe\nbase123\n')
+      .mockReturnValueOnce('takt:\n');

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-task');

@@ -76,22 +145,22 @@ describe('getOriginalInstruction', () => {
   });

   it('should return instruction text when takt prefix has content', () => {
-    mockExecFileSync.mockReturnValue('takt: add search feature\n');
+    mockExecFileSync
+      .mockReturnValueOnce('beadface\nbase123\n')
+      .mockReturnValueOnce('takt: add search feature\n');

     const result = getOriginalInstruction('/project', 'main', 'takt/20260128-task');

     expect(result).toBe('add search feature');
   });

-  it('should use correct git range with custom default branch', () => {
-    mockExecFileSync.mockReturnValue('takt: Add search feature\n');
-
-    getOriginalInstruction('/project', 'master', 'takt/20260128-add-search');
-
-    expect(mockExecFileSync).toHaveBeenCalledWith(
-      'git',
-      ['log', '--format=%s', '--reverse', 'master..takt/20260128-add-search'],
-      expect.objectContaining({ cwd: '/project' }),
-    );
+  it('should return original subject when branch entry commit has no takt prefix', () => {
+    mockExecFileSync
+      .mockReturnValueOnce('last789\nfirst456\nbase123\n')
+      .mockReturnValueOnce('Initial implementation\n');
+
+    const result = getOriginalInstruction('/project', 'main', 'takt/20260128-fix-auth');
+
+    expect(result).toBe('Initial implementation');
   });
 });
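Across these tests the subject-line handling is consistent: take the first line of the branch's first own commit subject, strip a leading `takt:` prefix if present, and return `''` for an empty or prefix-only subject. A sketch of just that extraction step — `extractInstruction` is a hypothetical name; the real logic lives inside `getOriginalInstruction`:

```typescript
// Hypothetical sketch of the takt-prefix stripping the tests pin down.
function extractInstruction(subject: string): string {
  // Only the first line of a multi-line commit message is used.
  const firstLine = subject.split('\n')[0]?.trim() ?? '';
  if (firstLine.startsWith('takt:')) {
    // 'takt: add search feature' → 'add search feature'; 'takt:' alone → ''.
    return firstLine.slice('takt:'.length).trim();
  }
  // Non-prefixed subjects (e.g. a branch created outside takt) pass through.
  return firstLine;
}

console.log(extractInstruction('takt: add search feature\n')); // → 'add search feature'
console.log(extractInstruction('Initial implementation\n')); // → 'Initial implementation'
```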
src/__tests__/global-pieceCategories.test.ts (new file, 117 lines)
@@ -0,0 +1,117 @@
+/**
+ * Tests for global piece category path resolution.
+ */
+
+import { existsSync, mkdirSync, mkdtempSync, readFileSync, rmSync, writeFileSync } from 'node:fs';
+import { tmpdir } from 'node:os';
+import { dirname, join } from 'node:path';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+
+const loadGlobalConfigMock = vi.hoisted(() => vi.fn());
+
+vi.mock('../infra/config/paths.js', () => ({
+  getGlobalConfigDir: () => '/tmp/.takt',
+}));
+
+vi.mock('../infra/config/global/globalConfig.js', () => ({
+  loadGlobalConfig: loadGlobalConfigMock,
+}));
+
+const { getPieceCategoriesPath, resetPieceCategories } = await import(
+  '../infra/config/global/pieceCategories.js'
+);
+
+function createTempCategoriesPath(): string {
+  const tempRoot = mkdtempSync(join(tmpdir(), 'takt-piece-categories-'));
+  return join(tempRoot, 'preferences', 'piece-categories.yaml');
+}
+
+describe('getPieceCategoriesPath', () => {
+  beforeEach(() => {
+    loadGlobalConfigMock.mockReset();
+  });
+
+  it('should return configured path when pieceCategoriesFile is set', () => {
+    // Given
+    loadGlobalConfigMock.mockReturnValue({
+      pieceCategoriesFile: '/custom/piece-categories.yaml',
+    });
+
+    // When
+    const path = getPieceCategoriesPath();
+
+    // Then
+    expect(path).toBe('/custom/piece-categories.yaml');
+  });
+
+  it('should return default path when pieceCategoriesFile is not set', () => {
+    // Given
+    loadGlobalConfigMock.mockReturnValue({});
+
+    // When
+    const path = getPieceCategoriesPath();
+
+    // Then
+    expect(path).toBe('/tmp/.takt/preferences/piece-categories.yaml');
+  });
+
+  it('should rethrow when global config loading fails', () => {
+    // Given
+    loadGlobalConfigMock.mockImplementation(() => {
+      throw new Error('invalid global config');
+    });
+
+    // When / Then
+    expect(() => getPieceCategoriesPath()).toThrow('invalid global config');
+  });
+});
+
+describe('resetPieceCategories', () => {
+  const tempRoots: string[] = [];
+
+  beforeEach(() => {
+    loadGlobalConfigMock.mockReset();
+  });
+
+  afterEach(() => {
+    for (const tempRoot of tempRoots) {
+      rmSync(tempRoot, { recursive: true, force: true });
+    }
+    tempRoots.length = 0;
+  });
+
+  it('should create parent directory and initialize with empty user categories', () => {
+    // Given
+    const categoriesPath = createTempCategoriesPath();
+    tempRoots.push(dirname(dirname(categoriesPath)));
+    loadGlobalConfigMock.mockReturnValue({
+      pieceCategoriesFile: categoriesPath,
+    });
+
+    // When
+    resetPieceCategories();
+
+    // Then
+    expect(existsSync(dirname(categoriesPath))).toBe(true);
+    expect(readFileSync(categoriesPath, 'utf-8')).toBe('piece_categories: {}\n');
+  });
+
+  it('should overwrite existing file with empty user categories', () => {
+    // Given
+    const categoriesPath = createTempCategoriesPath();
+    const categoriesDir = dirname(categoriesPath);
+    const tempRoot = dirname(categoriesDir);
+    tempRoots.push(tempRoot);
+    loadGlobalConfigMock.mockReturnValue({
+      pieceCategoriesFile: categoriesPath,
+    });
+    mkdirSync(categoriesDir, { recursive: true });
+    writeFileSync(categoriesPath, 'piece_categories:\n  old:\n    - stale-piece\n', 'utf-8');
+
+    // When
+    resetPieceCategories();
+
+    // Then
+    expect(readFileSync(categoriesPath, 'utf-8')).toBe('piece_categories: {}\n');
+  });
+});
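The path-resolution order these tests verify is simple: an explicit `pieceCategoriesFile` in the global config wins; otherwise the default is `<globalConfigDir>/preferences/piece-categories.yaml`. A minimal sketch under the assumption that the config is a plain object with an optional `pieceCategoriesFile` field — `resolvePieceCategoriesPath` and `GlobalConfigLike` are hypothetical names:

```typescript
import { join } from 'node:path';

// Assumed shape of the relevant slice of the global config.
interface GlobalConfigLike {
  pieceCategoriesFile?: string;
}

// Hypothetical sketch of the resolution order the tests assert.
function resolvePieceCategoriesPath(config: GlobalConfigLike, globalConfigDir: string): string {
  // Explicit override first, otherwise the conventional default location.
  return config.pieceCategoriesFile ?? join(globalConfigDir, 'preferences', 'piece-categories.yaml');
}

console.log(resolvePieceCategoriesPath({}, '/tmp/.takt'));
// → '/tmp/.takt/preferences/piece-categories.yaml'
```

Note the third test: a config-load failure is rethrown rather than swallowed into the default path, so a corrupt global config surfaces immediately instead of silently resetting the wrong file.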
48
src/__tests__/instruction-context.test.ts
Normal file
48
src/__tests__/instruction-context.test.ts
Normal file
@ -0,0 +1,48 @@
/**
 * Unit tests for instruction-context
 *
 * Tests buildEditRule function for localized edit permission messages.
 */

import { describe, it, expect } from 'vitest';
import { buildEditRule } from '../core/piece/instruction/instruction-context.js';

describe('buildEditRule', () => {
  describe('edit = true', () => {
    it('should return English editing-enabled message', () => {
      const result = buildEditRule(true, 'en');
      expect(result).toContain('Editing is ENABLED');
      expect(result).toContain('create, modify, and delete files');
    });

    it('should return Japanese editing-enabled message', () => {
      const result = buildEditRule(true, 'ja');
      expect(result).toContain('編集が許可されています');
      expect(result).toContain('ファイルの作成・変更・削除');
    });
  });

  describe('edit = false', () => {
    it('should return English editing-disabled message', () => {
      const result = buildEditRule(false, 'en');
      expect(result).toContain('Editing is DISABLED');
      expect(result).toContain('Do NOT create, modify, or delete');
    });

    it('should return Japanese editing-disabled message', () => {
      const result = buildEditRule(false, 'ja');
      expect(result).toContain('編集が禁止されています');
      expect(result).toContain('作成・変更・削除しないで');
    });
  });

  describe('edit = undefined', () => {
    it('should return empty string for English', () => {
      expect(buildEditRule(undefined, 'en')).toBe('');
    });

    it('should return empty string for Japanese', () => {
      expect(buildEditRule(undefined, 'ja')).toBe('');
    });
  });
});
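The tests above pin down buildEditRule's observable contract (localized message for edit = true/false, empty string for undefined) without showing the function itself. A minimal sketch satisfying that contract might look like this — a hypothetical implementation derived only from the asserted substrings, not the actual code in `instruction-context.ts`:

```typescript
// Hypothetical sketch of buildEditRule, reconstructed from test assertions only.
// The real message wording in takt may differ beyond the asserted substrings.
type Language = 'en' | 'ja';

function buildEditRule(edit: boolean | undefined, language: Language): string {
  if (edit === undefined) return ''; // no edit policy: emit nothing
  if (edit) {
    return language === 'ja'
      ? '編集が許可されています。ファイルの作成・変更・削除を行えます。'
      : 'Editing is ENABLED. You may create, modify, and delete files.';
  }
  return language === 'ja'
    ? '編集が禁止されています。ファイルを作成・変更・削除しないでください。'
    : 'Editing is DISABLED. Do NOT create, modify, or delete files.';
}
```

Keeping the undefined case as an empty string lets callers concatenate the rule into a larger instruction template without a conditional.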
135  src/__tests__/instruction-helpers.test.ts  Normal file
@ -0,0 +1,135 @@
/**
 * Unit tests for InstructionBuilder helper functions
 *
 * Tests isOutputContractItem, renderReportContext, and renderReportOutputInstruction.
 */

import { describe, it, expect } from 'vitest';
import {
  isOutputContractItem,
  renderReportContext,
  renderReportOutputInstruction,
} from '../core/piece/instruction/InstructionBuilder.js';
import type { PieceMovement, OutputContractEntry } from '../core/models/types.js';
import type { InstructionContext } from '../core/piece/instruction/instruction-context.js';

function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
  return {
    name: 'test-movement',
    personaDisplayName: 'tester',
    instructionTemplate: '',
    passPreviousResponse: false,
    ...overrides,
  };
}

function makeContext(overrides: Partial<InstructionContext> = {}): InstructionContext {
  return {
    task: 'test task',
    iteration: 1,
    maxIterations: 10,
    movementIteration: 1,
    cwd: '/tmp/test',
    projectCwd: '/tmp/project',
    userInputs: [],
    ...overrides,
  };
}

describe('isOutputContractItem', () => {
  it('should return true for OutputContractItem (has name)', () => {
    expect(isOutputContractItem({ name: 'report.md' })).toBe(true);
  });

  it('should return true for OutputContractItem with order/format', () => {
    expect(isOutputContractItem({ name: 'report.md', order: 'Output to file', format: 'markdown' })).toBe(true);
  });

  it('should return false for OutputContractLabelPath (has label and path)', () => {
    expect(isOutputContractItem({ label: 'Report', path: 'report.md' })).toBe(false);
  });
});

describe('renderReportContext', () => {
  it('should render single OutputContractItem', () => {
    const contracts: OutputContractEntry[] = [{ name: '00-plan.md' }];
    const result = renderReportContext(contracts, '/tmp/reports');

    expect(result).toContain('Report Directory: /tmp/reports/');
    expect(result).toContain('Report File: /tmp/reports/00-plan.md');
  });

  it('should render single OutputContractLabelPath', () => {
    const contracts: OutputContractEntry[] = [{ label: 'Plan', path: 'plan.md' }];
    const result = renderReportContext(contracts, '/tmp/reports');

    expect(result).toContain('Report Directory: /tmp/reports/');
    expect(result).toContain('Report File: /tmp/reports/plan.md');
  });

  it('should render multiple contracts as list', () => {
    const contracts: OutputContractEntry[] = [
      { name: '00-plan.md' },
      { label: 'Review', path: '01-review.md' },
    ];
    const result = renderReportContext(contracts, '/tmp/reports');

    expect(result).toContain('Report Directory: /tmp/reports/');
    expect(result).toContain('Report Files:');
    expect(result).toContain('00-plan.md: /tmp/reports/00-plan.md');
    expect(result).toContain('Review: /tmp/reports/01-review.md');
  });
});

describe('renderReportOutputInstruction', () => {
  it('should return empty string when no output contracts', () => {
    const step = makeMovement();
    const ctx = makeContext({ reportDir: '/tmp/reports' });
    expect(renderReportOutputInstruction(step, ctx, 'en')).toBe('');
  });

  it('should return empty string when no reportDir', () => {
    const step = makeMovement({ outputContracts: [{ name: 'report.md' }] });
    const ctx = makeContext();
    expect(renderReportOutputInstruction(step, ctx, 'en')).toBe('');
  });

  it('should render English single-file instruction', () => {
    const step = makeMovement({ outputContracts: [{ name: 'report.md' }] });
    const ctx = makeContext({ reportDir: '/tmp/reports', movementIteration: 2 });

    const result = renderReportOutputInstruction(step, ctx, 'en');
    expect(result).toContain('Report output');
    expect(result).toContain('Report File');
    expect(result).toContain('Iteration 2');
  });

  it('should render English multi-file instruction', () => {
    const step = makeMovement({
      outputContracts: [{ name: 'plan.md' }, { name: 'review.md' }],
    });
    const ctx = makeContext({ reportDir: '/tmp/reports' });

    const result = renderReportOutputInstruction(step, ctx, 'en');
    expect(result).toContain('Report Files');
  });

  it('should render Japanese single-file instruction', () => {
    const step = makeMovement({ outputContracts: [{ name: 'report.md' }] });
    const ctx = makeContext({ reportDir: '/tmp/reports', movementIteration: 1 });

    const result = renderReportOutputInstruction(step, ctx, 'ja');
    expect(result).toContain('レポート出力');
    expect(result).toContain('Report File');
  });

  it('should render Japanese multi-file instruction', () => {
    const step = makeMovement({
      outputContracts: [{ name: 'plan.md' }, { name: 'review.md' }],
    });
    const ctx = makeContext({ reportDir: '/tmp/reports' });

    const result = renderReportOutputInstruction(step, ctx, 'ja');
    expect(result).toContain('Report Files');
  });
});
@ -14,10 +14,11 @@ import { join } from 'node:path';
 import { tmpdir } from 'node:os';

 // --- Mocks ---
+const languageState = vi.hoisted(() => ({ value: 'en' as 'en' | 'ja' }));

 vi.mock('../infra/config/global/globalConfig.js', () => ({
   loadGlobalConfig: vi.fn().mockReturnValue({}),
-  getLanguage: vi.fn().mockReturnValue('en'),
+  getLanguage: vi.fn(() => languageState.value),
   getDisabledBuiltins: vi.fn().mockReturnValue([]),
   getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
 }));
@ -40,6 +41,7 @@ describe('Piece Loader IT: builtin piece loading', () => {

   beforeEach(() => {
     testDir = createTestDir();
+    languageState.value = 'en';
   });

   afterEach(() => {
@ -64,6 +66,39 @@ describe('Piece Loader IT: builtin piece loading', () => {
     const config = loadPiece('non-existent-piece-xyz', testDir);
     expect(config).toBeNull();
   });
+
+  it('should include and load e2e-test as a builtin piece', () => {
+    expect(builtinNames).toContain('e2e-test');
+
+    const config = loadPiece('e2e-test', testDir);
+    expect(config).not.toBeNull();
+
+    const planMovement = config!.movements.find((movement) => movement.name === 'plan_test');
+    const implementMovement = config!.movements.find((movement) => movement.name === 'implement_test');
+
+    expect(planMovement).toBeDefined();
+    expect(implementMovement).toBeDefined();
+    expect(planMovement!.instructionTemplate).toContain('missing E2E tests');
+    expect(implementMovement!.instructionTemplate).toContain('npm run test:e2e:mock');
+  });
+
+  it('should load e2e-test as a builtin piece in ja locale', () => {
+    languageState.value = 'ja';
+
+    const jaBuiltinNames = listBuiltinPieceNames({ includeDisabled: true });
+    expect(jaBuiltinNames).toContain('e2e-test');
+
+    const config = loadPiece('e2e-test', testDir);
+    expect(config).not.toBeNull();
+
+    const planMovement = config!.movements.find((movement) => movement.name === 'plan_test');
+    const implementMovement = config!.movements.find((movement) => movement.name === 'implement_test');
+
+    expect(planMovement).toBeDefined();
+    expect(implementMovement).toBeDefined();
+    expect(planMovement!.instructionTemplate).toContain('E2Eテスト');
+    expect(implementMovement!.instructionTemplate).toContain('npm run test:e2e:mock');
+  });
 });

 describe('Piece Loader IT: project-local piece override', () => {
180  src/__tests__/it-sigint-worker-pool.test.ts  Normal file
@ -0,0 +1,180 @@
/**
 * Integration test: SIGINT abort signal propagation in worker pool.
 *
 * Verifies that:
 * - AbortSignal is passed to tasks even when concurrency=1 (sequential mode)
 * - Aborting the controller causes the signal to fire, enabling task interruption
 * - The SIGINT handler in parallelExecution correctly aborts the controller
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import type { TaskInfo } from '../infra/task/index.js';

vi.mock('../shared/ui/index.js', () => ({
  header: vi.fn(),
  info: vi.fn(),
  warn: vi.fn(),
  error: vi.fn(),
  success: vi.fn(),
  status: vi.fn(),
  blankLine: vi.fn(),
}));

vi.mock('../shared/exitCodes.js', () => ({
  EXIT_SIGINT: 130,
}));

vi.mock('../shared/i18n/index.js', () => ({
  getLabel: vi.fn((key: string) => key),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  createLogger: () => ({
    info: vi.fn(),
    debug: vi.fn(),
    error: vi.fn(),
  }),
}));

const mockExecuteAndCompleteTask = vi.fn();

vi.mock('../features/tasks/execute/taskExecution.js', () => ({
  executeAndCompleteTask: (...args: unknown[]) => mockExecuteAndCompleteTask(...args),
}));

import { runWithWorkerPool } from '../features/tasks/execute/parallelExecution.js';

function createTask(name: string): TaskInfo {
  return {
    name,
    content: `Task: ${name}`,
    filePath: `/tasks/${name}.yaml`,
  };
}

function createMockTaskRunner() {
  return {
    getNextTask: vi.fn(() => null),
    claimNextTasks: vi.fn(() => []),
    completeTask: vi.fn(),
    failTask: vi.fn(),
  };
}

beforeEach(() => {
  vi.clearAllMocks();
  mockExecuteAndCompleteTask.mockResolvedValue(true);
});

describe('worker pool: abort signal propagation', () => {
  let savedSigintListeners: ((...args: unknown[]) => void)[];

  beforeEach(() => {
    savedSigintListeners = process.rawListeners('SIGINT') as ((...args: unknown[]) => void)[];
  });

  afterEach(() => {
    process.removeAllListeners('SIGINT');
    for (const listener of savedSigintListeners) {
      process.on('SIGINT', listener as NodeJS.SignalsListener);
    }
  });

  it('should pass abortSignal to tasks in sequential mode (concurrency=1)', async () => {
    // Given
    const tasks = [createTask('task-1')];
    const runner = createMockTaskRunner();
    const receivedSignals: (AbortSignal | undefined)[] = [];

    mockExecuteAndCompleteTask.mockImplementation(
      (_task: unknown, _runner: unknown, _cwd: unknown, _piece: unknown, _opts: unknown, parallelOpts: { abortSignal?: AbortSignal }) => {
        receivedSignals.push(parallelOpts?.abortSignal);
        return Promise.resolve(true);
      },
    );

    // When
    await runWithWorkerPool(runner as never, tasks, 1, '/cwd', 'default', undefined, 50);

    // Then: AbortSignal is passed even with concurrency=1
    expect(receivedSignals).toHaveLength(1);
    expect(receivedSignals[0]).toBeInstanceOf(AbortSignal);
  });

  it('should abort the signal when SIGINT fires in sequential mode', async () => {
    // Given
    const tasks = [createTask('long-task')];
    const runner = createMockTaskRunner();
    let capturedSignal: AbortSignal | undefined;

    mockExecuteAndCompleteTask.mockImplementation(
      (_task: unknown, _runner: unknown, _cwd: unknown, _piece: unknown, _opts: unknown, parallelOpts: { abortSignal?: AbortSignal }) => {
        capturedSignal = parallelOpts?.abortSignal;
        return new Promise((resolve) => {
          // Wait long enough for SIGINT to fire
          setTimeout(() => resolve(true), 200);
        });
      },
    );

    // Start execution
    const resultPromise = runWithWorkerPool(runner as never, tasks, 1, '/cwd', 'default', undefined, 50);

    // Wait for task to start
    await new Promise((resolve) => setTimeout(resolve, 20));

    // Find the SIGINT handler added by runWithWorkerPool
    const allListeners = process.rawListeners('SIGINT') as ((...args: unknown[]) => void)[];
    const newListener = allListeners.find((l) => !savedSigintListeners.includes(l));
    expect(newListener).toBeDefined();

    // Simulate SIGINT
    newListener!();

    // Wait for execution to complete
    await resultPromise;

    // Then: The abort signal should have been triggered
    expect(capturedSignal).toBeInstanceOf(AbortSignal);
    expect(capturedSignal!.aborted).toBe(true);
  });

  it('should share the same AbortSignal across sequential and parallel tasks', async () => {
    // Given: Multiple tasks in both sequential (concurrency=1) and parallel (concurrency=2)
    const tasks = [createTask('t1'), createTask('t2')];
    const runner = createMockTaskRunner();

    const receivedSignalsSeq: (AbortSignal | undefined)[] = [];
    const receivedSignalsPar: (AbortSignal | undefined)[] = [];

    mockExecuteAndCompleteTask.mockImplementation(
      (_task: unknown, _runner: unknown, _cwd: unknown, _piece: unknown, _opts: unknown, parallelOpts: { abortSignal?: AbortSignal }) => {
        receivedSignalsSeq.push(parallelOpts?.abortSignal);
        return Promise.resolve(true);
      },
    );

    // Sequential mode
    await runWithWorkerPool(runner as never, [...tasks], 1, '/cwd', 'default', undefined, 50);

    mockExecuteAndCompleteTask.mockClear();
    mockExecuteAndCompleteTask.mockImplementation(
      (_task: unknown, _runner: unknown, _cwd: unknown, _piece: unknown, _opts: unknown, parallelOpts: { abortSignal?: AbortSignal }) => {
        receivedSignalsPar.push(parallelOpts?.abortSignal);
        return Promise.resolve(true);
      },
    );

    // Parallel mode
    await runWithWorkerPool(runner as never, [...tasks], 2, '/cwd', 'default', undefined, 50);

    // Then: Both modes pass AbortSignal
    for (const signal of receivedSignalsSeq) {
      expect(signal).toBeInstanceOf(AbortSignal);
    }
    for (const signal of receivedSignalsPar) {
      expect(signal).toBeInstanceOf(AbortSignal);
    }
  });
});
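The pattern this test exercises — one AbortController whose signal is handed to every worker and aborted from a single SIGINT handler — can be sketched independently of takt's internals. This is an illustrative shape only (the names `runPool`/`run` are hypothetical; runWithWorkerPool's real wiring may differ):

```typescript
// Minimal sketch of SIGINT -> AbortController propagation in a worker pool.
// One controller is created per run; its signal is shared by all workers,
// including the degenerate concurrency=1 case, so Ctrl+C behaves uniformly.
async function runPool(
  tasks: string[],
  concurrency: number,
  run: (task: string, signal: AbortSignal) => Promise<void>,
): Promise<void> {
  const controller = new AbortController();
  const onSigint = () => controller.abort(); // single handler aborts every worker
  process.on('SIGINT', onSigint);
  try {
    const queue = [...tasks];
    const workers = Array.from({ length: Math.max(1, concurrency) }, async () => {
      let task: string | undefined;
      while ((task = queue.shift()) !== undefined) {
        if (controller.signal.aborted) return; // stop claiming new tasks
        await run(task, controller.signal);    // same signal instance each time
      }
    });
    await Promise.all(workers);
  } finally {
    process.removeListener('SIGINT', onSigint); // never leak the handler
  }
}
```

Tasks that honor the signal (forwarding it to child processes, or checking `signal.aborted` between steps) then stop promptly on Ctrl+C; removing the listener in `finally` mirrors the save/restore of SIGINT listeners done in the test above.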
204  src/__tests__/judgment-strategies.test.ts  Normal file
@ -0,0 +1,204 @@
/**
 * Unit tests for FallbackStrategy judgment strategies
 *
 * Tests AutoSelectStrategy and canApply logic for all strategies.
 * Strategies requiring external agent calls (ReportBased, ResponseBased,
 * AgentConsult) are tested for canApply and input validation only.
 */

import { describe, it, expect } from 'vitest';
import {
  AutoSelectStrategy,
  ReportBasedStrategy,
  ResponseBasedStrategy,
  AgentConsultStrategy,
  JudgmentStrategyFactory,
  type JudgmentContext,
} from '../core/piece/judgment/FallbackStrategy.js';
import type { PieceMovement } from '../core/models/types.js';

function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
  return {
    name: 'test-movement',
    personaDisplayName: 'tester',
    instructionTemplate: '',
    passPreviousResponse: false,
    ...overrides,
  };
}

function makeContext(overrides: Partial<JudgmentContext> = {}): JudgmentContext {
  return {
    step: makeMovement(),
    cwd: '/tmp/test',
    ...overrides,
  };
}

describe('AutoSelectStrategy', () => {
  const strategy = new AutoSelectStrategy();

  it('should have name "AutoSelect"', () => {
    expect(strategy.name).toBe('AutoSelect');
  });

  describe('canApply', () => {
    it('should return true when movement has exactly one rule', () => {
      const ctx = makeContext({
        step: makeMovement({
          rules: [{ condition: 'done', next: 'COMPLETE' }],
        }),
      });
      expect(strategy.canApply(ctx)).toBe(true);
    });

    it('should return false when movement has multiple rules', () => {
      const ctx = makeContext({
        step: makeMovement({
          rules: [
            { condition: 'approved', next: 'implement' },
            { condition: 'rejected', next: 'review' },
          ],
        }),
      });
      expect(strategy.canApply(ctx)).toBe(false);
    });

    it('should return false when movement has no rules', () => {
      const ctx = makeContext({
        step: makeMovement({ rules: undefined }),
      });
      expect(strategy.canApply(ctx)).toBe(false);
    });
  });

  describe('execute', () => {
    it('should return auto-selected tag for single-branch movement', async () => {
      const ctx = makeContext({
        step: makeMovement({
          name: 'review',
          rules: [{ condition: 'done', next: 'COMPLETE' }],
        }),
      });

      const result = await strategy.execute(ctx);
      expect(result.success).toBe(true);
      expect(result.tag).toBe('[REVIEW:1]');
    });
  });
});

describe('ReportBasedStrategy', () => {
  const strategy = new ReportBasedStrategy();

  it('should have name "ReportBased"', () => {
    expect(strategy.name).toBe('ReportBased');
  });

  describe('canApply', () => {
    it('should return true when reportDir and outputContracts are present', () => {
      const ctx = makeContext({
        reportDir: '/tmp/reports',
        step: makeMovement({
          outputContracts: [{ name: 'report.md' }],
        }),
      });
      expect(strategy.canApply(ctx)).toBe(true);
    });

    it('should return false when reportDir is missing', () => {
      const ctx = makeContext({
        step: makeMovement({
          outputContracts: [{ name: 'report.md' }],
        }),
      });
      expect(strategy.canApply(ctx)).toBe(false);
    });

    it('should return false when outputContracts is empty', () => {
      const ctx = makeContext({
        reportDir: '/tmp/reports',
        step: makeMovement({ outputContracts: [] }),
      });
      expect(strategy.canApply(ctx)).toBe(false);
    });

    it('should return false when outputContracts is undefined', () => {
      const ctx = makeContext({
        reportDir: '/tmp/reports',
        step: makeMovement(),
      });
      expect(strategy.canApply(ctx)).toBe(false);
    });
  });
});

describe('ResponseBasedStrategy', () => {
  const strategy = new ResponseBasedStrategy();

  it('should have name "ResponseBased"', () => {
    expect(strategy.name).toBe('ResponseBased');
  });

  describe('canApply', () => {
    it('should return true when lastResponse is non-empty', () => {
      const ctx = makeContext({ lastResponse: 'some response' });
      expect(strategy.canApply(ctx)).toBe(true);
    });

    it('should return false when lastResponse is undefined', () => {
      const ctx = makeContext({ lastResponse: undefined });
      expect(strategy.canApply(ctx)).toBe(false);
    });

    it('should return false when lastResponse is empty string', () => {
      const ctx = makeContext({ lastResponse: '' });
      expect(strategy.canApply(ctx)).toBe(false);
    });
  });
});

describe('AgentConsultStrategy', () => {
  const strategy = new AgentConsultStrategy();

  it('should have name "AgentConsult"', () => {
    expect(strategy.name).toBe('AgentConsult');
  });

  describe('canApply', () => {
    it('should return true when sessionId is non-empty', () => {
      const ctx = makeContext({ sessionId: 'session-123' });
      expect(strategy.canApply(ctx)).toBe(true);
    });

    it('should return false when sessionId is undefined', () => {
      const ctx = makeContext({ sessionId: undefined });
      expect(strategy.canApply(ctx)).toBe(false);
    });

    it('should return false when sessionId is empty string', () => {
      const ctx = makeContext({ sessionId: '' });
      expect(strategy.canApply(ctx)).toBe(false);
    });
  });

  describe('execute', () => {
    it('should return failure when sessionId is not provided', async () => {
      const ctx = makeContext({ sessionId: undefined });
      const result = await strategy.execute(ctx);
      expect(result.success).toBe(false);
      expect(result.reason).toBe('Session ID not provided');
    });
  });
});

describe('JudgmentStrategyFactory', () => {
  it('should create strategies in correct priority order', () => {
    const strategies = JudgmentStrategyFactory.createStrategies();
    expect(strategies).toHaveLength(4);
    expect(strategies[0]!.name).toBe('AutoSelect');
    expect(strategies[1]!.name).toBe('ReportBased');
    expect(strategies[2]!.name).toBe('ResponseBased');
    expect(strategies[3]!.name).toBe('AgentConsult');
  });
});
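The factory order asserted above implies a simple first-match dispatch: try each strategy in priority order and execute the first one whose canApply passes. A minimal sketch of that chain, with a toy context standing in for takt's JudgmentContext (the `judge` helper and `Ctx` shape are illustrative assumptions, not the actual FallbackStrategy code):

```typescript
// First-applicable-strategy dispatch, mirroring the priority order the tests
// assert: AutoSelect -> ReportBased -> ResponseBased -> AgentConsult.
interface Strategy<Ctx, Out> {
  name: string;
  canApply(ctx: Ctx): boolean;
  execute(ctx: Ctx): Out;
}

function judge<Ctx, Out>(
  strategies: Strategy<Ctx, Out>[],
  ctx: Ctx,
): { by: string; result: Out } | null {
  for (const s of strategies) {
    if (s.canApply(ctx)) return { by: s.name, result: s.execute(ctx) };
  }
  return null; // no strategy applicable: caller falls back or fails
}

// Toy chain: single-rule movements are auto-selected; otherwise fall through
// to a response-based judgment when a previous response exists.
type Ctx = { ruleCount: number; lastResponse?: string };
const chain: Strategy<Ctx, string>[] = [
  { name: 'AutoSelect', canApply: (c) => c.ruleCount === 1, execute: () => 'auto' },
  { name: 'ResponseBased', canApply: (c) => !!c.lastResponse, execute: () => 'from-response' },
];
```

Ordering the cheap, deterministic strategy first means an agent call is only attempted when nothing else can decide — which is why the factory test pins the array order, not just its contents.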
@ -1,185 +1,80 @@
-/**
- * Tests for listNonInteractive — non-interactive list output and branch actions.
- */
-
-import { execFileSync } from 'node:child_process';
+import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
+
 import * as fs from 'node:fs';
 import * as path from 'node:path';
 import * as os from 'node:os';
-import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
-import { listTasks } from '../features/tasks/list/index.js';
+import { stringify as stringifyYaml } from 'yaml';
+import { listTasksNonInteractive } from '../features/tasks/list/listNonInteractive.js';

-describe('listTasks non-interactive text output', () => {
-  let tmpDir: string;
-
-  beforeEach(() => {
-    tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'takt-test-ni-'));
-    execFileSync('git', ['init', '--initial-branch', 'main'], { cwd: tmpDir, stdio: 'pipe' });
-    execFileSync('git', ['config', 'user.name', 'Test User'], { cwd: tmpDir, stdio: 'pipe' });
-    execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: tmpDir, stdio: 'pipe' });
-    execFileSync('git', ['commit', '--allow-empty', '-m', 'init'], { cwd: tmpDir, stdio: 'pipe' });
-  });
+const mockInfo = vi.fn();
+vi.mock('../shared/ui/index.js', () => ({
+  info: (...args: unknown[]) => mockInfo(...args),
+}));
+
+vi.mock('../infra/task/branchList.js', async (importOriginal) => ({
+  ...(await importOriginal<Record<string, unknown>>()),
+  detectDefaultBranch: vi.fn(() => 'main'),
+  listTaktBranches: vi.fn(() => []),
+  buildListItems: vi.fn(() => []),
+}));
+
+let tmpDir: string;
+
+beforeEach(() => {
+  vi.clearAllMocks();
+  tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'takt-list-non-interactive-'));
+});
+
+afterEach(() => {
+  fs.rmSync(tmpDir, { recursive: true, force: true });
+});
+
+function writeTasksFile(projectDir: string): void {
+  const tasksFile = path.join(projectDir, '.takt', 'tasks.yaml');
+  fs.mkdirSync(path.dirname(tasksFile), { recursive: true });
+  fs.writeFileSync(tasksFile, stringifyYaml({
+    tasks: [
+      {
+        name: 'pending-task',
+        status: 'pending',
+        content: 'Pending content',
+        created_at: '2026-02-09T00:00:00.000Z',
+        started_at: null,
+        completed_at: null,
+      },
+      {
+        name: 'failed-task',
+        status: 'failed',
+        content: 'Failed content',
+        created_at: '2026-02-09T00:00:00.000Z',
+        started_at: '2026-02-09T00:01:00.000Z',
+        completed_at: '2026-02-09T00:02:00.000Z',
+        failure: { error: 'Boom' },
+      },
+    ],
+  }), 'utf-8');
+}
+
+describe('listTasksNonInteractive', () => {
+  it('should output pending and failed tasks in text format', async () => {
+    writeTasksFile(tmpDir);
+
+    await listTasksNonInteractive(tmpDir, { enabled: true, format: 'text' });
+
+    expect(mockInfo).toHaveBeenCalledWith(expect.stringContaining('[running] pending-task'));
+    expect(mockInfo).toHaveBeenCalledWith(expect.stringContaining('[failed] failed-task'));
   });

-  afterEach(() => {
-    fs.rmSync(tmpDir, { recursive: true, force: true });
-  });
+  it('should output JSON when format=json', async () => {
+    writeTasksFile(tmpDir);
+    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => undefined);

-  it('should output pending tasks in text format', async () => {
-    // Given
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(tasksDir, 'my-task.md'), 'Fix the login bug');
-
-    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
+    await listTasksNonInteractive(tmpDir, { enabled: true, format: 'json' });
+
+    expect(logSpy).toHaveBeenCalledTimes(1);
+    const payload = JSON.parse(logSpy.mock.calls[0]![0] as string) as { pendingTasks: Array<{ name: string }>; failedTasks: Array<{ name: string }> };
+    expect(payload.pendingTasks[0]?.name).toBe('pending-task');
+    expect(payload.failedTasks[0]?.name).toBe('failed-task');

-    // When
-    await listTasks(tmpDir, undefined, { enabled: true });
-
-    // Then
-    const calls = logSpy.mock.calls.map((c) => c[0] as string);
-    expect(calls).toContainEqual(expect.stringContaining('[pending] my-task'));
-    expect(calls).toContainEqual(expect.stringContaining('Fix the login bug'));
-    logSpy.mockRestore();
-  });
-
-  it('should output failed tasks in text format', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_failed-task');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'failed-task.md'), 'This failed');
-
-    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
-
-    // When
-    await listTasks(tmpDir, undefined, { enabled: true });
-
-    // Then
-    const calls = logSpy.mock.calls.map((c) => c[0] as string);
-    expect(calls).toContainEqual(expect.stringContaining('[failed] failed-task'));
-    expect(calls).toContainEqual(expect.stringContaining('This failed'));
-    logSpy.mockRestore();
-  });
-
-  it('should output both pending and failed tasks in text format', async () => {
-    // Given
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(tasksDir, 'pending-one.md'), 'Pending task');
-
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_failed-one');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'failed-one.md'), 'Failed task');
-
-    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
-
-    // When
-    await listTasks(tmpDir, undefined, { enabled: true });
-
-    // Then
-    const calls = logSpy.mock.calls.map((c) => c[0] as string);
-    expect(calls).toContainEqual(expect.stringContaining('[pending] pending-one'));
-    expect(calls).toContainEqual(expect.stringContaining('[failed] failed-one'));
-    logSpy.mockRestore();
-  });
-
-  it('should show info message when no tasks exist', async () => {
-    // Given: no tasks, no branches
-
-    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
-
-    // When
-    await listTasks(tmpDir, undefined, { enabled: true });
-
-    // Then
-    const calls = logSpy.mock.calls.map((c) => c[0] as string);
-    expect(calls.some((c) => c.includes('No tasks to list'))).toBe(true);
-    logSpy.mockRestore();
-  });
-});
-
-describe('listTasks non-interactive action errors', () => {
-  let tmpDir: string;
|
|
||||||
beforeEach(() => {
|
|
||||||
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'takt-test-ni-err-'));
|
|
||||||
execFileSync('git', ['init', '--initial-branch', 'main'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
execFileSync('git', ['config', 'user.name', 'Test User'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
execFileSync('git', ['commit', '--allow-empty', '-m', 'init'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
// Create a pending task so the "no tasks" early return is not triggered
|
|
||||||
const tasksDir = path.join(tmpDir, '.takt', 'tasks');
|
|
||||||
fs.mkdirSync(tasksDir, { recursive: true });
|
|
||||||
fs.writeFileSync(path.join(tasksDir, 'dummy.md'), 'dummy');
|
|
||||||
});
|
|
||||||
|
|
||||||
afterEach(() => {
|
|
||||||
fs.rmSync(tmpDir, { recursive: true, force: true });
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should exit with code 1 when --action specified without --branch', async () => {
|
|
||||||
// Given
|
|
||||||
const exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => { throw new Error('process.exit'); });
|
|
||||||
const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
|
||||||
|
|
||||||
// When / Then
|
|
||||||
await expect(
|
|
||||||
listTasks(tmpDir, undefined, { enabled: true, action: 'diff' }),
|
|
||||||
).rejects.toThrow('process.exit');
|
|
||||||
|
|
||||||
expect(exitSpy).toHaveBeenCalledWith(1);
|
|
||||||
exitSpy.mockRestore();
|
|
||||||
logSpy.mockRestore();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should exit with code 1 for invalid action', async () => {
|
|
||||||
// Given
|
|
||||||
const exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => { throw new Error('process.exit'); });
|
|
||||||
const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
|
||||||
|
|
||||||
// When / Then
|
|
||||||
await expect(
|
|
||||||
listTasks(tmpDir, undefined, { enabled: true, action: 'invalid', branch: 'some-branch' }),
|
|
||||||
).rejects.toThrow('process.exit');
|
|
||||||
|
|
||||||
expect(exitSpy).toHaveBeenCalledWith(1);
|
|
||||||
exitSpy.mockRestore();
|
|
||||||
logSpy.mockRestore();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should exit with code 1 when branch not found', async () => {
|
|
||||||
// Given
|
|
||||||
const exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => { throw new Error('process.exit'); });
|
|
||||||
const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
|
||||||
|
|
||||||
// When / Then
|
|
||||||
await expect(
|
|
||||||
listTasks(tmpDir, undefined, { enabled: true, action: 'diff', branch: 'takt/nonexistent' }),
|
|
||||||
).rejects.toThrow('process.exit');
|
|
||||||
|
|
||||||
expect(exitSpy).toHaveBeenCalledWith(1);
|
|
||||||
exitSpy.mockRestore();
|
|
||||||
logSpy.mockRestore();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should exit with code 1 for delete without --yes', async () => {
|
|
||||||
// Given: create a branch so it's found
|
|
||||||
execFileSync('git', ['checkout', '-b', 'takt/20250115-test-branch'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
execFileSync('git', ['checkout', 'main'], { cwd: tmpDir, stdio: 'pipe' });
|
|
||||||
|
|
||||||
const exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => { throw new Error('process.exit'); });
|
|
||||||
const logSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
|
||||||
|
|
||||||
// When / Then
|
|
||||||
await expect(
|
|
||||||
listTasks(tmpDir, undefined, {
|
|
||||||
enabled: true,
|
|
||||||
action: 'delete',
|
|
||||||
branch: 'takt/20250115-test-branch',
|
|
||||||
}),
|
|
||||||
).rejects.toThrow('process.exit');
|
|
||||||
|
|
||||||
expect(exitSpy).toHaveBeenCalledWith(1);
|
|
||||||
exitSpy.mockRestore();
|
|
||||||
logSpy.mockRestore();
|
logSpy.mockRestore();
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
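The error-path tests above all follow the same pattern: stub `process.exit` so it throws, assert the rejection, then check the recorded exit code. A minimal standalone sketch of that pattern without vitest's `vi.spyOn` — `withExitCaptured` is a hypothetical helper name, not part of takt:

```typescript
// Sketch of the exit-code assertion pattern used in the tests above:
// temporarily replace process.exit with a stub that records the code and
// throws (so execution stops like a real exit), then restore it.
function withExitCaptured(fn: () => void): number | null {
  const proc = process as unknown as { exit: (code?: number) => void };
  const originalExit = proc.exit;
  let captured: number | null = null;
  proc.exit = (code?: number) => {
    captured = code ?? 0;
    throw new Error('process.exit'); // halt the code path like a real exit
  };
  try {
    fn();
  } catch (err) {
    // Swallow only the sentinel error thrown by the stub.
    if (!(err instanceof Error) || err.message !== 'process.exit') throw err;
  } finally {
    proc.exit = originalExit;
  }
  return captured;
}

const code = withExitCaptured(() => process.exit(1));
// code === 1
```

`vi.spyOn(process, 'exit')` does the save/restore bookkeeping automatically; the hand-rolled version just makes the mechanism explicit.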

@@ -1,391 +1,94 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';
import { stringify as stringifyYaml } from 'yaml';

vi.mock('../shared/ui/index.js', () => ({
  info: vi.fn(),
  header: vi.fn(),
  blankLine: vi.fn(),
}));

vi.mock('../infra/task/branchList.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  listTaktBranches: vi.fn(() => []),
  buildListItems: vi.fn(() => []),
  detectDefaultBranch: vi.fn(() => 'main'),
}));

import { TaskRunner } from '../infra/task/runner.js';
import { listTasksNonInteractive } from '../features/tasks/list/listNonInteractive.js';

let tmpDir: string;

beforeEach(() => {
  vi.clearAllMocks();
  tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'takt-list-test-'));
});

afterEach(() => {
  fs.rmSync(tmpDir, { recursive: true, force: true });
});

function writeTasksFile(projectDir: string): void {
  const tasksFile = path.join(projectDir, '.takt', 'tasks.yaml');
  fs.mkdirSync(path.dirname(tasksFile), { recursive: true });
  fs.writeFileSync(tasksFile, stringifyYaml({
    tasks: [
      {
        name: 'pending-one',
        status: 'pending',
        content: 'Pending task',
        created_at: '2026-02-09T00:00:00.000Z',
        started_at: null,
        completed_at: null,
      },
      {
        name: 'failed-one',
        status: 'failed',
        content: 'Failed task',
        created_at: '2026-02-09T00:00:00.000Z',
        started_at: '2026-02-09T00:01:00.000Z',
        completed_at: '2026-02-09T00:02:00.000Z',
        failure: { error: 'boom' },
      },
    ],
  }), 'utf-8');
}

describe('TaskRunner list APIs', () => {
  it('should read pending and failed tasks from tasks.yaml', () => {
    writeTasksFile(tmpDir);

    const runner = new TaskRunner(tmpDir);
    const pending = runner.listPendingTaskItems();
    const failed = runner.listFailedTasks();

    expect(pending).toHaveLength(1);
    expect(pending[0]?.name).toBe('pending-one');
    expect(failed).toHaveLength(1);
    expect(failed[0]?.name).toBe('failed-one');
    expect(failed[0]?.failure?.error).toBe('boom');
  });
});

describe('listTasks non-interactive JSON output', () => {
  it('should output JSON object with branches, pendingTasks, and failedTasks', async () => {
    writeTasksFile(tmpDir);
    const logSpy = vi.spyOn(console, 'log').mockImplementation(() => undefined);

    await listTasksNonInteractive(tmpDir, { enabled: true, format: 'json' });

    expect(logSpy).toHaveBeenCalledTimes(1);
    const payload = JSON.parse(logSpy.mock.calls[0]![0] as string) as {
      branches: unknown[];
      pendingTasks: Array<{ name: string }>;
      failedTasks: Array<{ name: string }>;
    };
    expect(Array.isArray(payload.branches)).toBe(true);
    expect(payload.pendingTasks[0]?.name).toBe('pending-one');
    expect(payload.failedTasks[0]?.name).toBe('failed-one');

    logSpy.mockRestore();
  });
});
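The JSON test above pins down the payload shape (`branches`, `pendingTasks`, `failedTasks`). A small sketch of how a script might consume that output — the interfaces here are assumptions derived from the test expectations, not takt's exported types:

```typescript
// Payload shape assumed from the test above, not imported from takt.
interface TaskEntry { name: string }
interface ListPayload {
  branches: unknown[];
  pendingTasks: TaskEntry[];
  failedTasks: TaskEntry[];
}

// e.g. the single console.log line the spy captures in the test
const raw = '{"branches":[],"pendingTasks":[{"name":"pending-one"}],"failedTasks":[{"name":"failed-one"}]}';
const payload = JSON.parse(raw) as ListPayload;

// Summarize counts the way a CI wrapper might.
const summary = `${payload.pendingTasks.length} pending, ${payload.failedTasks.length} failed`;
// summary === "1 pending, 1 failed"
```

Because the command prints exactly one JSON object on stdout, piping it into `jq` or a wrapper script stays straightforward.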

src/__tests__/listTasksInteractivePendingLabel.test.ts (new file, 109 lines)
@@ -0,0 +1,109 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';
import type { TaskListItem } from '../infra/task/types.js';

const {
  mockSelectOption,
  mockHeader,
  mockInfo,
  mockBlankLine,
  mockConfirm,
  mockListPendingTaskItems,
  mockListFailedTasks,
  mockDeletePendingTask,
} = vi.hoisted(() => ({
  mockSelectOption: vi.fn(),
  mockHeader: vi.fn(),
  mockInfo: vi.fn(),
  mockBlankLine: vi.fn(),
  mockConfirm: vi.fn(),
  mockListPendingTaskItems: vi.fn(),
  mockListFailedTasks: vi.fn(),
  mockDeletePendingTask: vi.fn(),
}));

vi.mock('../infra/task/index.js', () => ({
  listTaktBranches: vi.fn(() => []),
  buildListItems: vi.fn(() => []),
  detectDefaultBranch: vi.fn(() => 'main'),
  TaskRunner: class {
    listPendingTaskItems() {
      return mockListPendingTaskItems();
    }
    listFailedTasks() {
      return mockListFailedTasks();
    }
  },
}));

vi.mock('../shared/prompt/index.js', () => ({
  selectOption: mockSelectOption,
  confirm: mockConfirm,
}));

vi.mock('../shared/ui/index.js', () => ({
  info: mockInfo,
  header: mockHeader,
  blankLine: mockBlankLine,
}));

vi.mock('../features/tasks/list/taskActions.js', () => ({
  showFullDiff: vi.fn(),
  showDiffAndPromptAction: vi.fn(),
  tryMergeBranch: vi.fn(),
  mergeBranch: vi.fn(),
  deleteBranch: vi.fn(),
  instructBranch: vi.fn(),
}));

vi.mock('../features/tasks/list/taskDeleteActions.js', () => ({
  deletePendingTask: mockDeletePendingTask,
  deleteFailedTask: vi.fn(),
}));

vi.mock('../features/tasks/list/taskRetryActions.js', () => ({
  retryFailedTask: vi.fn(),
}));

import { listTasks } from '../features/tasks/list/index.js';

describe('listTasks interactive pending label regression', () => {
  const pendingTask: TaskListItem = {
    kind: 'pending',
    name: 'my-task',
    createdAt: '2026-02-09T00:00:00',
    filePath: '/tmp/my-task.md',
    content: 'Fix running status label',
  };

  beforeEach(() => {
    vi.clearAllMocks();
    mockListPendingTaskItems.mockReturnValue([pendingTask]);
    mockListFailedTasks.mockReturnValue([]);
  });

  it('should show [running] in interactive menu for pending tasks', async () => {
    mockSelectOption.mockResolvedValueOnce(null);

    await listTasks('/project');

    expect(mockSelectOption).toHaveBeenCalledTimes(1);
    const menuOptions = mockSelectOption.mock.calls[0]![1] as Array<{ label: string; value: string }>;
    expect(menuOptions).toContainEqual(expect.objectContaining({ label: '[running] my-task', value: 'pending:0' }));
    expect(menuOptions.some((opt) => opt.label.includes('[pending]'))).toBe(false);
    expect(menuOptions.some((opt) => opt.label.includes('[pendig]'))).toBe(false);
  });

  it('should show [running] header when pending task is selected', async () => {
    mockSelectOption
      .mockResolvedValueOnce('pending:0')
      .mockResolvedValueOnce(null)
      .mockResolvedValueOnce(null);

    await listTasks('/project');

    expect(mockHeader).toHaveBeenCalledWith('[running] my-task');
    const headerTexts = mockHeader.mock.calls.map(([text]) => String(text));
    expect(headerTexts.some((text) => text.includes('[pending]'))).toBe(false);
    expect(headerTexts.some((text) => text.includes('[pendig]'))).toBe(false);
  });
});
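The regression test above pins down one behavior: tasks whose stored status is `pending` are surfaced with a `[running]` label in the interactive menu and header. A minimal sketch of that mapping — an assumption derived from the test expectations, not takt's actual implementation:

```typescript
// Label mapping assumed from the regression test's expectations; the real
// logic lives in takt's list feature and may differ in detail.
type TaskKind = 'pending' | 'failed';

function menuLabel(kind: TaskKind, name: string): string {
  // Pending tasks are intentionally shown as "running" in the menu.
  const tag = kind === 'pending' ? 'running' : kind;
  return `[${tag}] ${name}`;
}

const label = menuLabel('pending', 'my-task');
// label === '[running] my-task'
```

Keeping the mapping in one small function is what makes a regression like the misspelled `[pendig]` label easy to pin down with a single test.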

src/__tests__/loop-detector.test.ts (new file, 120 lines)
@@ -0,0 +1,120 @@
/**
 * Unit tests for LoopDetector
 *
 * Tests consecutive same-movement detection and configurable actions.
 */

import { describe, it, expect, beforeEach } from 'vitest';
import { LoopDetector } from '../core/piece/engine/loop-detector.js';

describe('LoopDetector', () => {
  describe('with default config', () => {
    let detector: LoopDetector;

    beforeEach(() => {
      detector = new LoopDetector();
    });

    it('should not detect loop for different movements', () => {
      const result1 = detector.check('step-a');
      const result2 = detector.check('step-b');
      const result3 = detector.check('step-a');
      expect(result1.isLoop).toBe(false);
      expect(result2.isLoop).toBe(false);
      expect(result3.isLoop).toBe(false);
    });

    it('should not detect loop below threshold (10 consecutive)', () => {
      for (let i = 0; i < 10; i++) {
        const result = detector.check('step-a');
        expect(result.isLoop).toBe(false);
      }
    });

    it('should detect loop at 11th consecutive execution (default threshold 10)', () => {
      for (let i = 0; i < 10; i++) {
        detector.check('step-a');
      }
      const result = detector.check('step-a');
      expect(result.isLoop).toBe(true);
      expect(result.count).toBe(11);
      expect(result.shouldWarn).toBe(true);
      expect(result.shouldAbort).toBe(false);
    });

    it('should reset consecutive count when movement changes', () => {
      for (let i = 0; i < 8; i++) {
        detector.check('step-a');
      }
      detector.check('step-b');
      const result = detector.check('step-a');
      expect(result.isLoop).toBe(false);
      expect(result.count).toBe(1);
    });

    it('should track consecutive count correctly', () => {
      detector.check('step-a');
      expect(detector.getConsecutiveCount()).toBe(1);
      detector.check('step-a');
      expect(detector.getConsecutiveCount()).toBe(2);
      detector.check('step-b');
      expect(detector.getConsecutiveCount()).toBe(1);
    });
  });

  describe('with abort action', () => {
    it('should set shouldAbort when action is abort', () => {
      const detector = new LoopDetector({ maxConsecutiveSameStep: 3, action: 'abort' });

      for (let i = 0; i < 3; i++) {
        detector.check('step-a');
      }
      const result = detector.check('step-a');
      expect(result.isLoop).toBe(true);
      expect(result.shouldAbort).toBe(true);
      expect(result.shouldWarn).toBe(true);
    });
  });

  describe('with ignore action', () => {
    it('should not warn or abort when action is ignore', () => {
      const detector = new LoopDetector({ maxConsecutiveSameStep: 3, action: 'ignore' });

      for (let i = 0; i < 3; i++) {
        detector.check('step-a');
      }
      const result = detector.check('step-a');
      expect(result.isLoop).toBe(true);
      expect(result.shouldAbort).toBe(false);
      expect(result.shouldWarn).toBe(false);
    });
  });

  describe('with custom threshold', () => {
    it('should detect loop at custom threshold + 1', () => {
      const detector = new LoopDetector({ maxConsecutiveSameStep: 2 });
|
|
||||||
|
detector.check('step-a');
|
||||||
|
detector.check('step-a');
|
||||||
|
const result = detector.check('step-a');
|
||||||
|
expect(result.isLoop).toBe(true);
|
||||||
|
expect(result.count).toBe(3);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('reset', () => {
|
||||||
|
it('should clear all state', () => {
|
||||||
|
const detector = new LoopDetector({ maxConsecutiveSameStep: 2 });
|
||||||
|
|
||||||
|
detector.check('step-a');
|
||||||
|
detector.check('step-a');
|
||||||
|
detector.reset();
|
||||||
|
|
||||||
|
expect(detector.getConsecutiveCount()).toBe(0);
|
||||||
|
|
||||||
|
const result = detector.check('step-a');
|
||||||
|
expect(result.isLoop).toBe(false);
|
||||||
|
expect(result.count).toBe(1);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
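The tests above pin down the observable contract of `LoopDetector` fairly tightly: a loop is flagged at threshold + 1 consecutive checks of the same movement, the count resets on a different movement, and `shouldWarn`/`shouldAbort` follow the configured action. A minimal sketch consistent with those tests (the real implementation lives in `src/core/piece/engine/loop-detector.ts`; the type names and defaults here are assumptions) is:

```typescript
// Hypothetical sketch of LoopDetector, reverse-engineered from the tests above.
type LoopAction = 'warn' | 'abort' | 'ignore';

interface LoopDetectorConfig {
  maxConsecutiveSameStep?: number; // threshold; the loop is flagged at threshold + 1
  action?: LoopAction;
}

interface LoopCheckResult {
  isLoop: boolean;
  count: number;
  shouldWarn: boolean;
  shouldAbort: boolean;
}

class LoopDetector {
  private lastStep: string | null = null;
  private consecutive = 0;
  private readonly max: number;
  private readonly action: LoopAction;

  constructor(config: LoopDetectorConfig = {}) {
    this.max = config.maxConsecutiveSameStep ?? 10; // default threshold per the tests
    this.action = config.action ?? 'warn';
  }

  check(step: string): LoopCheckResult {
    // Same movement extends the run; a different movement restarts it at 1.
    this.consecutive = step === this.lastStep ? this.consecutive + 1 : 1;
    this.lastStep = step;
    const isLoop = this.consecutive > this.max;
    return {
      isLoop,
      count: this.consecutive,
      shouldWarn: isLoop && this.action !== 'ignore',
      shouldAbort: isLoop && this.action === 'abort',
    };
  }

  getConsecutiveCount(): number {
    return this.consecutive;
  }

  reset(): void {
    this.lastStep = null;
    this.consecutive = 0;
  }
}
```

Keeping the threshold comparison strict (`count > max`) is what makes the 10th check pass and the 11th trip the detector in the default-config tests.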
@@ -33,6 +33,7 @@ describe('StatusSchema', () => {
     expect(StatusSchema.parse('approved')).toBe('approved');
     expect(StatusSchema.parse('rejected')).toBe('rejected');
     expect(StatusSchema.parse('blocked')).toBe('blocked');
+    expect(StatusSchema.parse('error')).toBe('error');
     expect(StatusSchema.parse('answer')).toBe('answer');
   });
 });

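This hunk adds `error` to the status vocabulary, matching the changelog's split of provider errors out of `blocked`. The real `StatusSchema` is presumably a schema-library enum; a dependency-free sketch of an equivalent parser, using only the status values visible in this hunk (the full enum may contain more), is:

```typescript
// Hypothetical, dependency-free stand-in for StatusSchema; the listed values
// are only those visible in the test hunk above.
const STATUSES = ['approved', 'rejected', 'blocked', 'error', 'answer'] as const;
type Status = (typeof STATUSES)[number];

const StatusSchema = {
  // Mirrors a zod-style .parse(): returns the value if valid, throws otherwise.
  parse(value: string): Status {
    if ((STATUSES as readonly string[]).includes(value)) return value as Status;
    throw new Error(`Invalid status: ${value}`);
  },
};
```

Treating `error` as its own terminal state (rather than folding it into `blocked`) is what lets callers attach retry behavior only to provider failures.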
src/__tests__/naming.test.ts (new file, 87 lines)
@@ -0,0 +1,87 @@
/**
 * Unit tests for task naming utilities
 *
 * Tests nowIso, firstLine, and sanitizeTaskName functions.
 */

import { describe, it, expect, vi, afterEach } from 'vitest';
import { nowIso, firstLine, sanitizeTaskName } from '../infra/task/naming.js';

describe('nowIso', () => {
  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('should return a valid ISO 8601 string', () => {
    const result = nowIso();
    expect(() => new Date(result)).not.toThrow();
    expect(result).toMatch(/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/);
  });

  it('should return current time', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-06-15T14:30:00.000Z'));

    expect(nowIso()).toBe('2025-06-15T14:30:00.000Z');

    vi.useRealTimers();
  });
});

describe('firstLine', () => {
  it('should return the first line of text', () => {
    expect(firstLine('first line\nsecond line\nthird line')).toBe('first line');
  });

  it('should trim leading whitespace from content', () => {
    expect(firstLine('  hello world\nsecond')).toBe('hello world');
  });

  it('should truncate to 80 characters', () => {
    const longLine = 'a'.repeat(100);
    expect(firstLine(longLine)).toBe('a'.repeat(80));
  });

  it('should handle empty string', () => {
    expect(firstLine('')).toBe('');
  });

  it('should handle single line', () => {
    expect(firstLine('just one line')).toBe('just one line');
  });

  it('should handle whitespace-only input', () => {
    expect(firstLine('   \n  ')).toBe('');
  });
});

describe('sanitizeTaskName', () => {
  it('should lowercase the input', () => {
    expect(sanitizeTaskName('Hello World')).toBe('hello-world');
  });

  it('should replace special characters with spaces then hyphens', () => {
    expect(sanitizeTaskName('task@name#123')).toBe('task-name-123');
  });

  it('should collapse multiple hyphens', () => {
    expect(sanitizeTaskName('a---b')).toBe('a-b');
  });

  it('should trim leading/trailing whitespace', () => {
    expect(sanitizeTaskName('  hello  ')).toBe('hello');
  });

  it('should handle typical task names', () => {
    expect(sanitizeTaskName('Fix: login bug (#42)')).toBe('fix-login-bug-42');
  });

  it('should generate fallback name for empty result', () => {
    const result = sanitizeTaskName('!@#$%');
    expect(result).toMatch(/^task-\d+$/);
  });

  it('should preserve numbers and lowercase letters', () => {
    expect(sanitizeTaskName('abc123def')).toBe('abc123def');
  });
});
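The `firstLine` and `sanitizeTaskName` behaviors asserted above can be realized by very small functions. A sketch consistent with every expectation in the file (the real implementations live in `src/infra/task/naming.ts`; the exact regexes and the `Date.now()` fallback suffix are assumptions) is:

```typescript
// Hypothetical sketches of the naming helpers, reverse-engineered from the tests above.

// First line of the text, trimmed, capped at 80 characters.
function firstLine(text: string): string {
  return text.split('\n')[0].trim().slice(0, 80);
}

// Lowercased slug: runs of non-alphanumerics collapse to one hyphen,
// leading/trailing hyphens are stripped, and an empty result falls back
// to a timestamped "task-<n>" name.
function sanitizeTaskName(input: string): string {
  const slug = input
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
  return slug === '' ? `task-${Date.now()}` : slug;
}
```

Collapsing whole runs of punctuation into a single hyphen is what makes `'Fix: login bug (#42)'` come out as `fix-login-bug-42` rather than `fix--login-bug---42-`.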
@@ -3,14 +3,18 @@
  */

 import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
-import { mkdirSync, rmSync, writeFileSync, existsSync, readFileSync } from 'node:fs';
+import { mkdirSync, rmSync, writeFileSync } from 'node:fs';
 import { join } from 'node:path';
 import { tmpdir } from 'node:os';
 import { randomUUID } from 'node:crypto';
 import type { PieceWithSource } from '../infra/config/index.js';

+const languageState = vi.hoisted(() => ({
+  value: 'en' as 'en' | 'ja',
+}));
+
 const pathsState = vi.hoisted(() => ({
-  resourcesDir: '',
+  resourcesRoot: '',
   userCategoriesPath: '',
 }));

@@ -18,7 +22,9 @@ vi.mock('../infra/config/global/globalConfig.js', async (importOriginal) => {
   const original = await importOriginal() as Record<string, unknown>;
   return {
     ...original,
-    getLanguage: () => 'en',
+    getLanguage: () => languageState.value,
+    getBuiltinPiecesEnabled: () => true,
+    getDisabledBuiltins: () => [],
   };
 });

@@ -26,13 +32,15 @@ vi.mock('../infra/resources/index.js', async (importOriginal) => {
   const original = await importOriginal() as Record<string, unknown>;
   return {
     ...original,
-    getLanguageResourcesDir: () => pathsState.resourcesDir,
+    getLanguageResourcesDir: (lang: string) => join(pathsState.resourcesRoot, lang),
   };
 });

-vi.mock('../infra/config/global/pieceCategories.js', async () => {
+vi.mock('../infra/config/global/pieceCategories.js', async (importOriginal) => {
+  const original = await importOriginal() as Record<string, unknown>;
   return {
-    ensureUserCategoriesFile: () => pathsState.userCategoriesPath,
+    ...original,
+    getPieceCategoriesPath: () => pathsState.userCategoriesPath,
   };
 });

@@ -70,76 +78,21 @@ describe('piece category config loading', () => {

   beforeEach(() => {
     testDir = join(tmpdir(), `takt-cat-config-${randomUUID()}`);
-    resourcesDir = join(testDir, 'resources');
+    resourcesDir = join(testDir, 'resources', 'en');

     mkdirSync(resourcesDir, { recursive: true });
-    pathsState.resourcesDir = resourcesDir;
+    mkdirSync(join(testDir, 'resources', 'ja'), { recursive: true });
+    pathsState.resourcesRoot = join(testDir, 'resources');
+    languageState.value = 'en';
+    pathsState.userCategoriesPath = join(testDir, 'user-piece-categories.yaml');
   });

   afterEach(() => {
     rmSync(testDir, { recursive: true, force: true });
   });

-  it('should load categories from user file (auto-copied from default)', () => {
-    const userPath = join(testDir, 'piece-categories.yaml');
-    writeYaml(userPath, `
-piece_categories:
-  Default:
-    pieces:
-      - simple
-show_others_category: true
-others_category_name: "Others"
-`);
-    pathsState.userCategoriesPath = userPath;
-
+  it('should return null when builtin categories file is missing', () => {
     const config = getPieceCategories();
-    expect(config).not.toBeNull();
-    expect(config!.pieceCategories).toEqual([
-      { name: 'Default', pieces: ['simple'], children: [] },
-    ]);
-    expect(config!.showOthersCategory).toBe(true);
-    expect(config!.othersCategoryName).toBe('Others');
-  });
-
-  it('should return null when user file has no piece_categories', () => {
-    const userPath = join(testDir, 'piece-categories.yaml');
-    writeYaml(userPath, `
-show_others_category: true
-`);
-    pathsState.userCategoriesPath = userPath;
-
-    const config = getPieceCategories();
-    expect(config).toBeNull();
-  });
-
-  it('should parse nested categories from user file', () => {
-    const userPath = join(testDir, 'piece-categories.yaml');
-    writeYaml(userPath, `
-piece_categories:
-  Parent:
-    pieces:
-      - parent-piece
-    Child:
-      pieces:
-        - child-piece
-`);
-    pathsState.userCategoriesPath = userPath;
-
-    const config = getPieceCategories();
-    expect(config).not.toBeNull();
-    expect(config!.pieceCategories).toEqual([
-      {
-        name: 'Parent',
-        pieces: ['parent-piece'],
-        children: [
-          { name: 'Child', pieces: ['child-piece'], children: [] },
-        ],
-      },
-    ]);
-  });
-
-  it('should return null when default categories file is missing', () => {
-    const config = loadDefaultCategories();
     expect(config).toBeNull();
   });

@@ -156,19 +109,170 @@ piece_categories:
     expect(config!.pieceCategories).toEqual([
       { name: 'Quick Start', pieces: ['default'], children: [] },
     ]);
+    expect(config!.builtinPieceCategories).toEqual([
+      { name: 'Quick Start', pieces: ['default'], children: [] },
+    ]);
+    expect(config!.userPieceCategories).toEqual([]);
+  });
+
+  it('should use builtin categories when user overlay file is missing', () => {
+    writeYaml(join(resourcesDir, 'piece-categories.yaml'), `
+piece_categories:
+  Main:
+    pieces:
+      - default
+show_others_category: true
+others_category_name: Others
+`);
+
+    const config = getPieceCategories();
+    expect(config).not.toBeNull();
+    expect(config!.pieceCategories).toEqual([
+      { name: 'Main', pieces: ['default'], children: [] },
+    ]);
+    expect(config!.userPieceCategories).toEqual([]);
+    expect(config!.showOthersCategory).toBe(true);
+    expect(config!.othersCategoryName).toBe('Others');
+  });
+
+  it('should merge user overlay categories with builtin categories', () => {
+    writeYaml(join(resourcesDir, 'piece-categories.yaml'), `
+piece_categories:
+  Main:
+    pieces:
+      - default
+      - coding
+    Child:
+      pieces:
+        - nested
+  Review:
+    pieces:
+      - review-only
+      - e2e-test
+show_others_category: true
+others_category_name: Others
+`);
+
+    writeYaml(pathsState.userCategoriesPath, `
+piece_categories:
+  Main:
+    pieces:
+      - custom
+  My Team:
+    pieces:
+      - team-flow
+show_others_category: false
+others_category_name: Unclassified
+`);
+
+    const config = getPieceCategories();
+    expect(config).not.toBeNull();
+    expect(config!.pieceCategories).toEqual([
+      {
+        name: 'Main',
+        pieces: ['custom'],
+        children: [
+          { name: 'Child', pieces: ['nested'], children: [] },
+        ],
+      },
+      { name: 'Review', pieces: ['review-only', 'e2e-test'], children: [] },
+      { name: 'My Team', pieces: ['team-flow'], children: [] },
+    ]);
+    expect(config!.builtinPieceCategories).toEqual([
+      {
+        name: 'Main',
+        pieces: ['default', 'coding'],
+        children: [
+          { name: 'Child', pieces: ['nested'], children: [] },
+        ],
+      },
+      { name: 'Review', pieces: ['review-only', 'e2e-test'], children: [] },
+    ]);
+    expect(config!.userPieceCategories).toEqual([
+      { name: 'Main', pieces: ['custom'], children: [] },
+      { name: 'My Team', pieces: ['team-flow'], children: [] },
+    ]);
+    expect(config!.showOthersCategory).toBe(false);
+    expect(config!.othersCategoryName).toBe('Unclassified');
+  });
+
+  it('should load ja builtin categories and include e2e-test under レビュー', () => {
+    languageState.value = 'ja';
+
+    writeYaml(join(testDir, 'resources', 'ja', 'piece-categories.yaml'), `
+piece_categories:
+  レビュー:
+    pieces:
+      - review-only
+      - e2e-test
+`);
+
+    const config = getPieceCategories();
+    expect(config).not.toBeNull();
+    expect(config!.pieceCategories).toEqual([
+      { name: 'レビュー', pieces: ['review-only', 'e2e-test'], children: [] },
+    ]);
+  });
+
+  it('should override others settings without replacing categories when user overlay has no piece_categories', () => {
+    writeYaml(join(resourcesDir, 'piece-categories.yaml'), `
+piece_categories:
+  Main:
+    pieces:
+      - default
+  Review:
+    pieces:
+      - review-only
+show_others_category: true
+others_category_name: Others
+`);
+
+    writeYaml(pathsState.userCategoriesPath, `
+show_others_category: false
+others_category_name: Unclassified
+`);
+
+    const config = getPieceCategories();
+    expect(config).not.toBeNull();
+    expect(config!.pieceCategories).toEqual([
+      { name: 'Main', pieces: ['default'], children: [] },
+      { name: 'Review', pieces: ['review-only'], children: [] },
+    ]);
+    expect(config!.builtinPieceCategories).toEqual([
+      { name: 'Main', pieces: ['default'], children: [] },
+      { name: 'Review', pieces: ['review-only'], children: [] },
+    ]);
+    expect(config!.userPieceCategories).toEqual([]);
+    expect(config!.showOthersCategory).toBe(false);
+    expect(config!.othersCategoryName).toBe('Unclassified');
   });
 });

 describe('buildCategorizedPieces', () => {
-  it('should place all pieces (user and builtin) into a unified category tree', () => {
+  it('should collect missing pieces with source information', () => {
     const allPieces = createPieceMap([
-      { name: 'a', source: 'user' },
-      { name: 'b', source: 'user' },
-      { name: 'c', source: 'builtin' },
+      { name: 'custom', source: 'user' },
+      { name: 'nested', source: 'builtin' },
+      { name: 'team-flow', source: 'user' },
     ]);
     const config = {
       pieceCategories: [
-        { name: 'Cat', pieces: ['a', 'missing', 'c'], children: [] },
+        {
+          name: 'Main',
+          pieces: ['custom'],
+          children: [{ name: 'Child', pieces: ['nested'], children: [] }],
+        },
+        { name: 'My Team', pieces: ['team-flow'], children: [] },
+      ],
+      builtinPieceCategories: [
+        {
+          name: 'Main',
+          pieces: ['default'],
+          children: [{ name: 'Child', pieces: ['nested'], children: [] }],
+        },
+      ],
+      userPieceCategories: [
+        { name: 'My Team', pieces: ['missing-user-piece'], children: [] },
       ],
       showOthersCategory: true,
       othersCategoryName: 'Others',
@@ -176,30 +280,19 @@ describe('buildCategorizedPieces', () => {

     const categorized = buildCategorizedPieces(allPieces, config);
     expect(categorized.categories).toEqual([
-      { name: 'Cat', pieces: ['a', 'c'], children: [] },
-      { name: 'Others', pieces: ['b'], children: [] },
+      {
+        name: 'Main',
+        pieces: ['custom'],
+        children: [{ name: 'Child', pieces: ['nested'], children: [] }],
+      },
+      { name: 'My Team', pieces: ['team-flow'], children: [] },
     ]);
     expect(categorized.missingPieces).toEqual([
-      { categoryPath: ['Cat'], pieceName: 'missing' },
+      { categoryPath: ['Main'], pieceName: 'default', source: 'builtin' },
+      { categoryPath: ['My Team'], pieceName: 'missing-user-piece', source: 'user' },
     ]);
   });

-  it('should skip empty categories', () => {
-    const allPieces = createPieceMap([
-      { name: 'a', source: 'user' },
-    ]);
-    const config = {
-      pieceCategories: [
-        { name: 'Empty', pieces: [], children: [] },
-      ],
-      showOthersCategory: false,
-      othersCategoryName: 'Others',
-    };
-
-    const categorized = buildCategorizedPieces(allPieces, config);
-    expect(categorized.categories).toEqual([]);
-  });
-
   it('should append Others category for uncategorized pieces', () => {
     const allPieces = createPieceMap([
       { name: 'default', source: 'builtin' },
@@ -209,6 +302,10 @@ describe('buildCategorizedPieces', () => {
       pieceCategories: [
         { name: 'Main', pieces: ['default'], children: [] },
       ],
+      builtinPieceCategories: [
+        { name: 'Main', pieces: ['default'], children: [] },
+      ],
+      userPieceCategories: [],
       showOthersCategory: true,
       othersCategoryName: 'Others',
     };
@@ -220,28 +317,6 @@ describe('buildCategorizedPieces', () => {
     ]);
   });

-  it('should merge uncategorized pieces into existing Others category', () => {
-    const allPieces = createPieceMap([
-      { name: 'default', source: 'builtin' },
-      { name: 'extra', source: 'builtin' },
-      { name: 'user-piece', source: 'user' },
-    ]);
-    const config = {
-      pieceCategories: [
-        { name: 'Main', pieces: ['default'], children: [] },
-        { name: 'Others', pieces: ['extra'], children: [] },
-      ],
-      showOthersCategory: true,
-      othersCategoryName: 'Others',
-    };
-
-    const categorized = buildCategorizedPieces(allPieces, config);
-    expect(categorized.categories).toEqual([
-      { name: 'Main', pieces: ['default'], children: [] },
-      { name: 'Others', pieces: ['extra', 'user-piece'], children: [] },
-    ]);
-  });
-
   it('should not append Others when showOthersCategory is false', () => {
     const allPieces = createPieceMap([
       { name: 'default', source: 'builtin' },
@@ -251,6 +326,10 @@ describe('buildCategorizedPieces', () => {
       pieceCategories: [
         { name: 'Main', pieces: ['default'], children: [] },
       ],
+      builtinPieceCategories: [
+        { name: 'Main', pieces: ['default'], children: [] },
+      ],
+      userPieceCategories: [],
       showOthersCategory: false,
       othersCategoryName: 'Others',
     };
@@ -286,25 +365,3 @@ describe('buildCategorizedPieces', () => {
     expect(paths).toEqual(['Parent / Child']);
   });
 });
-
-describe('ensureUserCategoriesFile (integration)', () => {
-  let testDir: string;
-
-  beforeEach(() => {
-    testDir = join(tmpdir(), `takt-cat-ensure-${randomUUID()}`);
-    mkdirSync(testDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    rmSync(testDir, { recursive: true, force: true });
-  });
-
-  it('should copy default categories to user path when missing', async () => {
-    // Use real ensureUserCategoriesFile (not mocked)
-    const { ensureUserCategoriesFile } = await import('../infra/config/global/pieceCategories.js');
-
-    // This test depends on the mock still being active — just verify the mock returns our path
-    const result = ensureUserCategoriesFile('/tmp/default.yaml');
-    expect(typeof result).toBe('string');
-  });
-});
src/__tests__/reportDir.test.ts (new file, 70 lines)
@@ -0,0 +1,70 @@
/**
 * Unit tests for report directory name generation
 *
 * Tests timestamp formatting and task summary slugification.
 */

import { describe, it, expect, vi, afterEach } from 'vitest';
import { generateReportDir } from '../shared/utils/reportDir.js';

describe('generateReportDir', () => {
  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('should generate directory name with timestamp and task summary', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-01-15T10:30:45.000Z'));

    const result = generateReportDir('Add login feature');
    expect(result).toBe('20250115-103045-add-login-feature');

    vi.useRealTimers();
  });

  it('should truncate long task descriptions to 30 characters', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-01-01T00:00:00.000Z'));

    const longTask = 'This is a very long task description that should be truncated';
    const result = generateReportDir(longTask);
    // Timestamp is fixed, summary is truncated from first 30 chars
    expect(result).toMatch(/^20250101-000000-/);
    // The slug part should be derived from the first 30 chars
    const slug = result.replace(/^20250101-000000-/, '');
    expect(slug.length).toBeLessThanOrEqual(30);

    vi.useRealTimers();
  });

  it('should preserve Japanese characters in summary', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-06-01T12:00:00.000Z'));

    const result = generateReportDir('タスク指示書の実装');
    expect(result).toContain('タスク指示書の実装');

    vi.useRealTimers();
  });

  it('should replace special characters with hyphens', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-01-01T00:00:00.000Z'));

    const result = generateReportDir('Fix: bug (#42)');
    const slug = result.replace(/^20250101-000000-/, '');
    expect(slug).not.toMatch(/[^a-z0-9\u3040-\u309f\u30a0-\u30ff\u4e00-\u9faf-]/);

    vi.useRealTimers();
  });

  it('should default to "task" when summary is empty after cleanup', () => {
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2025-01-01T00:00:00.000Z'));

    const result = generateReportDir('!@#$%^&*()');
    expect(result).toBe('20250101-000000-task');

    vi.useRealTimers();
  });
});
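The tests above imply a `YYYYMMDD-HHMMSS` timestamp prefix plus a slug that keeps ASCII alphanumerics and Japanese characters, truncates the summary to 30 characters, and falls back to `task`. A sketch consistent with them (the real implementation lives in `src/shared/utils/reportDir.ts`; the UTC accessors and exact character classes are assumptions) is:

```typescript
// Hypothetical sketch of generateReportDir, reverse-engineered from the tests above.
function generateReportDir(taskSummary: string): string {
  const now = new Date();
  const pad = (n: number) => String(n).padStart(2, '0');
  // YYYYMMDD-HHMMSS prefix (UTC here; the real code may use local time)
  const ts =
    `${now.getUTCFullYear()}${pad(now.getUTCMonth() + 1)}${pad(now.getUTCDate())}` +
    `-${pad(now.getUTCHours())}${pad(now.getUTCMinutes())}${pad(now.getUTCSeconds())}`;

  const slug = taskSummary
    .slice(0, 30) // truncate the summary before slugifying
    .toLowerCase()
    // keep ASCII alphanumerics plus hiragana, katakana, and kanji; everything else -> hyphen
    .replace(/[^a-z0-9\u3040-\u309f\u30a0-\u30ff\u4e00-\u9faf]+/g, '-')
    .replace(/^-+|-+$/g, '');

  return `${ts}-${slug || 'task'}`;
}
```

Slicing to 30 characters before slugification matches the test's check that the slug part never exceeds 30 characters, since the hyphen replacements can only shrink the string.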
src/__tests__/resetCategories.test.ts (new file, 44 lines)
@@ -0,0 +1,44 @@
/**
 * Tests for reset categories command behavior.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';

vi.mock('../infra/config/global/pieceCategories.js', () => ({
  resetPieceCategories: vi.fn(),
  getPieceCategoriesPath: vi.fn(() => '/tmp/user-piece-categories.yaml'),
}));

vi.mock('../shared/ui/index.js', () => ({
  header: vi.fn(),
  success: vi.fn(),
  info: vi.fn(),
}));

import { resetPieceCategories } from '../infra/config/global/pieceCategories.js';
import { header, success, info } from '../shared/ui/index.js';
import { resetCategoriesToDefault } from '../features/config/resetCategories.js';

const mockResetPieceCategories = vi.mocked(resetPieceCategories);
const mockHeader = vi.mocked(header);
const mockSuccess = vi.mocked(success);
const mockInfo = vi.mocked(info);

describe('resetCategoriesToDefault', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should reset user category overlay and show updated message', async () => {
    // Given

    // When
    await resetCategoriesToDefault();

    // Then
    expect(mockHeader).toHaveBeenCalledWith('Reset Categories');
    expect(mockResetPieceCategories).toHaveBeenCalledTimes(1);
    expect(mockSuccess).toHaveBeenCalledWith('User category overlay reset.');
    expect(mockInfo).toHaveBeenCalledWith(' /tmp/user-piece-categories.yaml');
  });
});
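The mock expectations above fully describe the command's flow: print a header, delete the user overlay, confirm, then show which file was affected. A dependency-injected sketch of that flow (the real `resetCategoriesToDefault` in `src/features/config/resetCategories.ts` imports these collaborators directly; the `Deps` interface here is purely illustrative) is:

```typescript
// Hypothetical sketch of what resetCategoriesToDefault appears to do, with its
// ui/config collaborators injected so the sketch is self-contained.
interface Deps {
  header: (title: string) => void;
  success: (msg: string) => void;
  info: (msg: string) => void;
  resetPieceCategories: () => void;
  getPieceCategoriesPath: () => string;
}

async function resetCategoriesToDefault(deps: Deps): Promise<void> {
  deps.header('Reset Categories');
  deps.resetPieceCategories(); // drop the user category overlay
  deps.success('User category overlay reset.');
  deps.info(` ${deps.getPieceCategoriesPath()}`); // show which file was reset
}
```

Injecting the collaborators is only a convenience for this sketch; the test instead mocks the modules with `vi.mock`, which achieves the same isolation.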
src/__tests__/rule-evaluator.test.ts (new file, 229 lines)
@@ -0,0 +1,229 @@
/**
 * Unit tests for RuleEvaluator
 *
 * Tests the evaluation pipeline: aggregate → tag detection → ai() → ai judge fallback.
 */

import { describe, it, expect, vi } from 'vitest';
import { RuleEvaluator, type RuleEvaluatorContext } from '../core/piece/evaluation/RuleEvaluator.js';
import type { PieceMovement, PieceState } from '../core/models/types.js';

function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
  return {
    name: 'test-movement',
    personaDisplayName: 'tester',
    instructionTemplate: '',
    passPreviousResponse: false,
    ...overrides,
  };
}

function makeState(): PieceState {
  return {
    pieceName: 'test',
    currentMovement: 'test-movement',
    iteration: 1,
    movementOutputs: new Map(),
    userInputs: [],
    personaSessions: new Map(),
    movementIterations: new Map(),
    status: 'running',
  };
}

function makeContext(overrides: Partial<RuleEvaluatorContext> = {}): RuleEvaluatorContext {
  return {
    state: makeState(),
    cwd: '/tmp/test',
    detectRuleIndex: vi.fn().mockReturnValue(-1),
    callAiJudge: vi.fn().mockResolvedValue(-1),
    ...overrides,
  };
}

describe('RuleEvaluator', () => {
  describe('evaluate', () => {
    it('should return undefined when movement has no rules', async () => {
      const step = makeMovement({ rules: undefined });
      const ctx = makeContext();
      const evaluator = new RuleEvaluator(step, ctx);

      const result = await evaluator.evaluate('agent output', 'tag output');
      expect(result).toBeUndefined();
    });

    it('should return undefined when rules array is empty', async () => {
      const step = makeMovement({ rules: [] });
      const ctx = makeContext();
      const evaluator = new RuleEvaluator(step, ctx);

      const result = await evaluator.evaluate('agent output', 'tag output');
      expect(result).toBeUndefined();
    });

    it('should detect rule via Phase 3 tag output', async () => {
      const step = makeMovement({
        rules: [
          { condition: 'approved', next: 'implement' },
          { condition: 'rejected', next: 'review' },
        ],
      });
      const detectRuleIndex = vi.fn().mockReturnValue(0);
      const ctx = makeContext({ detectRuleIndex });
      const evaluator = new RuleEvaluator(step, ctx);

      const result = await evaluator.evaluate('agent content', 'tag content with [TEST-MOVEMENT:1]');
      expect(result).toEqual({ index: 0, method: 'phase3_tag' });
      expect(detectRuleIndex).toHaveBeenCalledWith('tag content with [TEST-MOVEMENT:1]', 'test-movement');
    });

    it('should fallback to Phase 1 tag when Phase 3 tag not found', async () => {
      const step = makeMovement({
        rules: [
          { condition: 'approved', next: 'implement' },
          { condition: 'rejected', next: 'review' },
        ],
      });
      // Phase 3 tagContent is non-empty but detectRuleIndex returns -1 (no match)
      // Phase 1 agentContent check: detectRuleIndex returns 1
      const detectRuleIndex = vi.fn()
        .mockReturnValueOnce(-1) // Phase 3 tag not found
        .mockReturnValueOnce(1); // Phase 1 tag found
      const ctx = makeContext({ detectRuleIndex });
      const evaluator = new RuleEvaluator(step, ctx);

      const result = await evaluator.evaluate('agent content', 'phase3 content');
|
||||||
|
expect(result).toEqual({ index: 1, method: 'phase1_tag' });
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should skip interactiveOnly rules in non-interactive mode', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'user-fix', next: 'fix', interactiveOnly: true },
|
||||||
|
{ condition: 'auto-fix', next: 'autofix' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
// Tag detection returns index 0 (interactiveOnly rule)
|
||||||
|
const detectRuleIndex = vi.fn().mockReturnValue(0);
|
||||||
|
const callAiJudge = vi.fn().mockResolvedValue(-1);
|
||||||
|
const ctx = makeContext({ detectRuleIndex, callAiJudge, interactive: false });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
// Should skip interactive-only rule and eventually throw
|
||||||
|
await expect(evaluator.evaluate('content', 'tag')).rejects.toThrow('no rule matched');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should allow interactiveOnly rules in interactive mode', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'user-fix', next: 'fix', interactiveOnly: true },
|
||||||
|
{ condition: 'auto-fix', next: 'autofix' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
const detectRuleIndex = vi.fn().mockReturnValue(0);
|
||||||
|
const ctx = makeContext({ detectRuleIndex, interactive: true });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
const result = await evaluator.evaluate('content', 'tag');
|
||||||
|
expect(result).toEqual({ index: 0, method: 'phase3_tag' });
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should evaluate ai() conditions via AI judge', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'approved', next: 'implement', isAiCondition: true, aiConditionText: 'is it approved?' },
|
||||||
|
{ condition: 'rejected', next: 'review', isAiCondition: true, aiConditionText: 'is it rejected?' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
// callAiJudge returns 0 (first ai condition matched)
|
||||||
|
const callAiJudge = vi.fn().mockResolvedValue(0);
|
||||||
|
const ctx = makeContext({ callAiJudge });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
const result = await evaluator.evaluate('agent output', '');
|
||||||
|
expect(result).toEqual({ index: 0, method: 'ai_judge' });
|
||||||
|
expect(callAiJudge).toHaveBeenCalledWith(
|
||||||
|
'agent output',
|
||||||
|
[
|
||||||
|
{ index: 0, text: 'is it approved?' },
|
||||||
|
{ index: 1, text: 'is it rejected?' },
|
||||||
|
],
|
||||||
|
{ cwd: '/tmp/test' },
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should use ai_judge_fallback when no other method matches', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'approved', next: 'implement' },
|
||||||
|
{ condition: 'rejected', next: 'review' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
// No rules have isAiCondition, so evaluateAiConditions returns -1 without calling callAiJudge.
|
||||||
|
// evaluateAllConditionsViaAiJudge is the only caller of callAiJudge.
|
||||||
|
const callAiJudge = vi.fn().mockResolvedValue(1);
|
||||||
|
const ctx = makeContext({ callAiJudge });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
const result = await evaluator.evaluate('agent output', '');
|
||||||
|
expect(result).toEqual({ index: 1, method: 'ai_judge_fallback' });
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should throw when no rule matches after all detection phases', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'approved', next: 'implement' },
|
||||||
|
{ condition: 'rejected', next: 'review' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
const ctx = makeContext();
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
await expect(evaluator.evaluate('', '')).rejects.toThrow(
|
||||||
|
'Status not found for movement "test-movement": no rule matched after all detection phases',
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should reject out-of-bounds tag detection index', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{ condition: 'approved', next: 'implement' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
// Tag detection returns index 5 (out of bounds)
|
||||||
|
const detectRuleIndex = vi.fn().mockReturnValue(5);
|
||||||
|
const callAiJudge = vi.fn().mockResolvedValue(-1);
|
||||||
|
const ctx = makeContext({ detectRuleIndex, callAiJudge });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
await expect(evaluator.evaluate('content', 'tag')).rejects.toThrow('no rule matched');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should skip ai() conditions for interactiveOnly rules in non-interactive mode', async () => {
|
||||||
|
const step = makeMovement({
|
||||||
|
rules: [
|
||||||
|
{
|
||||||
|
condition: 'user confirms',
|
||||||
|
next: 'fix',
|
||||||
|
interactiveOnly: true,
|
||||||
|
isAiCondition: true,
|
||||||
|
aiConditionText: 'did the user confirm?',
|
||||||
|
},
|
||||||
|
{ condition: 'auto proceed', next: 'COMPLETE' },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
// In non-interactive mode, interactiveOnly rules are filtered out from ai judge.
|
||||||
|
// evaluateAiConditions skips the interactiveOnly ai() rule, returning -1.
|
||||||
|
// evaluateAllConditionsViaAiJudge filters to only non-interactive rules,
|
||||||
|
// passing conditions=[{index: 1, text: 'auto proceed'}] to judge.
|
||||||
|
// The judge returns 0 (first condition in filtered array).
|
||||||
|
const callAiJudge = vi.fn().mockResolvedValue(0);
|
||||||
|
const ctx = makeContext({ callAiJudge, interactive: false });
|
||||||
|
const evaluator = new RuleEvaluator(step, ctx);
|
||||||
|
|
||||||
|
const result = await evaluator.evaluate('output', '');
|
||||||
|
// Returns the judge result index (0) directly — it's the index into the filtered conditions array
|
||||||
|
expect(result).toEqual({ index: 0, method: 'ai_judge_fallback' });
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
164
src/__tests__/rule-utils.test.ts
Normal file
@@ -0,0 +1,164 @@
/**
 * Unit tests for rule-utils
 *
 * Tests tag-based rule detection, single-branch auto-selection,
 * and report file extraction from output contracts.
 */

import { describe, it, expect } from 'vitest';
import {
  hasTagBasedRules,
  hasOnlyOneBranch,
  getAutoSelectedTag,
  getReportFiles,
} from '../core/piece/evaluation/rule-utils.js';
import type { PieceMovement, OutputContractEntry } from '../core/models/types.js';

function makeMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
  return {
    name: 'test-movement',
    personaDisplayName: 'tester',
    instructionTemplate: '',
    passPreviousResponse: false,
    ...overrides,
  };
}

describe('hasTagBasedRules', () => {
  it('should return false when movement has no rules', () => {
    const step = makeMovement({ rules: undefined });
    expect(hasTagBasedRules(step)).toBe(false);
  });

  it('should return false when rules array is empty', () => {
    const step = makeMovement({ rules: [] });
    expect(hasTagBasedRules(step)).toBe(false);
  });

  it('should return true when rules contain tag-based conditions', () => {
    const step = makeMovement({
      rules: [
        { condition: 'approved' },
        { condition: 'rejected' },
      ],
    });
    expect(hasTagBasedRules(step)).toBe(true);
  });

  it('should return false when all rules are ai() conditions', () => {
    const step = makeMovement({
      rules: [
        { condition: 'approved', isAiCondition: true, aiConditionText: 'is it approved?' },
        { condition: 'rejected', isAiCondition: true, aiConditionText: 'is it rejected?' },
      ],
    });
    expect(hasTagBasedRules(step)).toBe(false);
  });

  it('should return false when all rules are aggregate conditions', () => {
    const step = makeMovement({
      rules: [
        { condition: 'all approved', isAggregateCondition: true, aggregateType: 'all', aggregateConditionText: 'approved' },
      ],
    });
    expect(hasTagBasedRules(step)).toBe(false);
  });

  it('should return true when mixed rules include tag-based ones', () => {
    const step = makeMovement({
      rules: [
        { condition: 'approved', isAiCondition: true, aiConditionText: 'approved?' },
        { condition: 'manual check' },
      ],
    });
    expect(hasTagBasedRules(step)).toBe(true);
  });
});

describe('hasOnlyOneBranch', () => {
  it('should return false when rules is undefined', () => {
    const step = makeMovement({ rules: undefined });
    expect(hasOnlyOneBranch(step)).toBe(false);
  });

  it('should return false when rules array is empty', () => {
    const step = makeMovement({ rules: [] });
    expect(hasOnlyOneBranch(step)).toBe(false);
  });

  it('should return true when exactly one rule exists', () => {
    const step = makeMovement({
      rules: [{ condition: 'done', next: 'COMPLETE' }],
    });
    expect(hasOnlyOneBranch(step)).toBe(true);
  });

  it('should return false when multiple rules exist', () => {
    const step = makeMovement({
      rules: [
        { condition: 'approved', next: 'implement' },
        { condition: 'rejected', next: 'review' },
      ],
    });
    expect(hasOnlyOneBranch(step)).toBe(false);
  });
});

describe('getAutoSelectedTag', () => {
  it('should return uppercase tag for single-branch movement', () => {
    const step = makeMovement({
      name: 'ai-review',
      rules: [{ condition: 'done', next: 'COMPLETE' }],
    });
    expect(getAutoSelectedTag(step)).toBe('[AI-REVIEW:1]');
  });

  it('should throw when multiple branches exist', () => {
    const step = makeMovement({
      rules: [
        { condition: 'approved', next: 'implement' },
        { condition: 'rejected', next: 'review' },
      ],
    });
    expect(() => getAutoSelectedTag(step)).toThrow('Cannot auto-select tag when multiple branches exist');
  });

  it('should throw when no rules exist', () => {
    const step = makeMovement({ rules: undefined });
    expect(() => getAutoSelectedTag(step)).toThrow('Cannot auto-select tag when multiple branches exist');
  });
});

describe('getReportFiles', () => {
  it('should return empty array when outputContracts is undefined', () => {
    expect(getReportFiles(undefined)).toEqual([]);
  });

  it('should return empty array when outputContracts is empty', () => {
    expect(getReportFiles([])).toEqual([]);
  });

  it('should extract name from OutputContractItem entries', () => {
    const contracts: OutputContractEntry[] = [
      { name: '00-plan.md' },
      { name: '01-review.md' },
    ];
    expect(getReportFiles(contracts)).toEqual(['00-plan.md', '01-review.md']);
  });

  it('should extract path from OutputContractLabelPath entries', () => {
    const contracts: OutputContractEntry[] = [
      { label: 'Scope', path: 'scope.md' },
      { label: 'Decisions', path: 'decisions.md' },
    ];
    expect(getReportFiles(contracts)).toEqual(['scope.md', 'decisions.md']);
  });

  it('should handle mixed entry types', () => {
    const contracts: OutputContractEntry[] = [
      { name: '00-plan.md' },
      { label: 'Review', path: 'review.md' },
    ];
    expect(getReportFiles(contracts)).toEqual(['00-plan.md', 'review.md']);
  });
});
@@ -21,18 +21,18 @@ vi.mock('../infra/config/index.js', () => ({
 import { loadGlobalConfig } from '../infra/config/index.js';
 const mockLoadGlobalConfig = vi.mocked(loadGlobalConfig);

-const mockGetNextTask = vi.fn();
 const mockClaimNextTasks = vi.fn();
 const mockCompleteTask = vi.fn();
 const mockFailTask = vi.fn();
+const mockRecoverInterruptedRunningTasks = vi.fn();

 vi.mock('../infra/task/index.js', async (importOriginal) => ({
   ...(await importOriginal<Record<string, unknown>>()),
   TaskRunner: vi.fn().mockImplementation(() => ({
-    getNextTask: mockGetNextTask,
     claimNextTasks: mockClaimNextTasks,
     completeTask: mockCompleteTask,
     failTask: mockFailTask,
+    recoverInterruptedRunningTasks: mockRecoverInterruptedRunningTasks,
   })),
 }));

@@ -128,11 +128,15 @@ function createTask(name: string): TaskInfo {
     name,
     content: `Task: ${name}`,
     filePath: `/tasks/${name}.yaml`,
+    createdAt: '2026-02-09T00:00:00.000Z',
+    status: 'pending',
+    data: null,
   };
 }

 beforeEach(() => {
   vi.clearAllMocks();
+  mockRecoverInterruptedRunningTasks.mockReturnValue(0);
 });

 describe('runAllTasks concurrency', () => {
@@ -155,7 +159,7 @@ describe('runAllTasks concurrency', () => {
     await runAllTasks('/project');

     // Then
-    expect(mockInfo).toHaveBeenCalledWith('No pending tasks in .takt/tasks/');
+    expect(mockInfo).toHaveBeenCalledWith('No pending tasks in .takt/tasks.yaml');
   });

   it('should execute tasks sequentially via worker pool when concurrency is 1', async () => {
@@ -401,6 +405,28 @@ describe('runAllTasks concurrency', () => {
     expect(mockStatus).toHaveBeenCalledWith('Failed', '1', 'red');
   });

+  it('should persist failure reason and movement when piece aborts', async () => {
+    const task1 = createTask('fail-with-detail');
+
+    mockExecutePiece.mockResolvedValue({
+      success: false,
+      reason: 'blocked_by_review',
+      lastMovement: 'review',
+      lastMessage: 'security check failed',
+    });
+    mockClaimNextTasks
+      .mockReturnValueOnce([task1])
+      .mockReturnValueOnce([]);
+
+    await runAllTasks('/project');
+
+    expect(mockFailTask).toHaveBeenCalledWith(expect.objectContaining({
+      response: 'blocked_by_review',
+      failureMovement: 'review',
+      failureLastMessage: 'security check failed',
+    }));
+  });
+
   it('should pass abortSignal and taskPrefix to executePiece in parallel mode', async () => {
     // Given: One task in parallel mode
     const task1 = createTask('parallel-task');
@@ -423,7 +449,7 @@ describe('runAllTasks concurrency', () => {
     expect(pieceOptions).toHaveProperty('taskPrefix', 'parallel-task');
   });

-  it('should not pass abortSignal or taskPrefix in sequential mode', async () => {
+  it('should pass abortSignal but not taskPrefix in sequential mode', async () => {
     // Given: Sequential mode
     mockLoadGlobalConfig.mockReturnValue({
       language: 'en',
@@ -444,11 +470,11 @@ describe('runAllTasks concurrency', () => {
     // When
     await runAllTasks('/project');

-    // Then: executePiece should not have abortSignal or taskPrefix
+    // Then: executePiece should have abortSignal but not taskPrefix
     expect(mockExecutePiece).toHaveBeenCalledTimes(1);
     const callArgs = mockExecutePiece.mock.calls[0];
     const pieceOptions = callArgs?.[3];
-    expect(pieceOptions?.abortSignal).toBeUndefined();
+    expect(pieceOptions?.abortSignal).toBeInstanceOf(AbortSignal);
     expect(pieceOptions?.taskPrefix).toBeUndefined();
   });
 });
@@ -1,15 +1,8 @@
-/**
- * Tests for saveTaskFile and saveTaskFromInteractive
- */
-
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import * as fs from 'node:fs';
 import * as path from 'node:path';
 import { tmpdir } from 'node:os';
+import { parse as parseYaml } from 'yaml';
-vi.mock('../infra/task/summarize.js', () => ({
-  summarizeTaskName: vi.fn(),
-}));
-
 vi.mock('../shared/ui/index.js', () => ({
   success: vi.fn(),
@@ -31,12 +24,10 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
   }),
 }));

-import { summarizeTaskName } from '../infra/task/summarize.js';
 import { success, info } from '../shared/ui/index.js';
 import { confirm, promptInput } from '../shared/prompt/index.js';
 import { saveTaskFile, saveTaskFromInteractive } from '../features/tasks/add/index.js';

-const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
 const mockSuccess = vi.mocked(success);
 const mockInfo = vi.mocked(info);
 const mockConfirm = vi.mocked(confirm);
@@ -44,10 +35,14 @@ const mockPromptInput = vi.mocked(promptInput);

 let testDir: string;

+function loadTasks(testDir: string): { tasks: Array<Record<string, unknown>> } {
+  const raw = fs.readFileSync(path.join(testDir, '.takt', 'tasks.yaml'), 'utf-8');
+  return parseYaml(raw) as { tasks: Array<Record<string, unknown>> };
+}
+
 beforeEach(() => {
   vi.clearAllMocks();
   testDir = fs.mkdtempSync(path.join(tmpdir(), 'takt-test-save-'));
-  mockSummarizeTaskName.mockResolvedValue('test-task');
 });

 afterEach(() => {
@@ -57,243 +52,74 @@ afterEach(() => {
 });

 describe('saveTaskFile', () => {
-  it('should create task file with correct YAML content', async () => {
-    // Given
-    const taskContent = 'Implement feature X\nDetails here';
-
-    // When
-    const filePath = await saveTaskFile(testDir, taskContent);
-
-    // Then
-    expect(fs.existsSync(filePath)).toBe(true);
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('Implement feature X');
-    expect(content).toContain('Details here');
-  });
-
-  it('should create .takt/tasks directory if it does not exist', async () => {
-    // Given
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    expect(fs.existsSync(tasksDir)).toBe(false);
-
-    // When
-    await saveTaskFile(testDir, 'Task content');
-
-    // Then
-    expect(fs.existsSync(tasksDir)).toBe(true);
-  });
+  it('should append task to tasks.yaml', async () => {
+    const created = await saveTaskFile(testDir, 'Implement feature X\nDetails here');
+
+    expect(created.taskName).toContain('implement-feature-x');
+    expect(created.tasksFile).toBe(path.join(testDir, '.takt', 'tasks.yaml'));
+    expect(fs.existsSync(created.tasksFile)).toBe(true);
+
+    const tasks = loadTasks(testDir).tasks;
+    expect(tasks).toHaveLength(1);
+    expect(tasks[0]?.content).toContain('Implement feature X');
+  });

-  it('should include piece in YAML when specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { piece: 'review' });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('piece: review');
-  });
-
-  it('should include issue number in YAML when specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { issue: 42 });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('issue: 42');
-  });
-
-  it('should include worktree in YAML when specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { worktree: true });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('worktree: true');
-  });
-
-  it('should include branch in YAML when specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { branch: 'feat/my-branch' });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('branch: feat/my-branch');
-  });
-
-  it('should not include optional fields when not specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Simple task');
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).not.toContain('piece:');
-    expect(content).not.toContain('issue:');
-    expect(content).not.toContain('worktree:');
-    expect(content).not.toContain('branch:');
-    expect(content).not.toContain('auto_pr:');
-  });
-
-  it('should include auto_pr in YAML when specified', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { autoPr: true });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('auto_pr: true');
-  });
-
-  it('should include auto_pr: false in YAML when specified as false', async () => {
-    // When
-    const filePath = await saveTaskFile(testDir, 'Task', { autoPr: false });
-
-    // Then
-    const content = fs.readFileSync(filePath, 'utf-8');
-    expect(content).toContain('auto_pr: false');
-  });
-
-  it('should use first line for filename generation', async () => {
-    // When
-    await saveTaskFile(testDir, 'First line\nSecond line');
-
-    // Then
-    expect(mockSummarizeTaskName).toHaveBeenCalledWith('First line', { cwd: testDir });
-  });
-
-  it('should handle duplicate filenames with counter', async () => {
-    // Given: first file already exists
-    await saveTaskFile(testDir, 'Task 1');
-
-    // When: second file with same slug
-    const filePath = await saveTaskFile(testDir, 'Task 2');
-
-    // Then
-    expect(path.basename(filePath)).toBe('test-task-1.yaml');
+  it('should include optional fields', async () => {
+    await saveTaskFile(testDir, 'Task', {
+      piece: 'review',
+      issue: 42,
+      worktree: true,
+      branch: 'feat/my-branch',
+      autoPr: false,
+    });
+
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.piece).toBe('review');
+    expect(task.issue).toBe(42);
+    expect(task.worktree).toBe(true);
+    expect(task.branch).toBe('feat/my-branch');
+    expect(task.auto_pr).toBe(false);
+  });
+
+  it('should generate unique names on duplicates', async () => {
+    const first = await saveTaskFile(testDir, 'Same title');
+    const second = await saveTaskFile(testDir, 'Same title');
+
+    expect(first.taskName).not.toBe(second.taskName);
   });
 });

 describe('saveTaskFromInteractive', () => {
-  it('should save task with worktree settings when user confirms worktree', async () => {
-    // Given: user confirms worktree, accepts defaults, confirms auto-PR
-    mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
-    mockPromptInput.mockResolvedValueOnce(''); // Worktree path → auto
-    mockPromptInput.mockResolvedValueOnce(''); // Branch name → auto
-    mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes
-
-    // When
+  it('should save task with worktree settings when user confirms', async () => {
+    mockConfirm.mockResolvedValueOnce(true);
+    mockPromptInput.mockResolvedValueOnce('');
+    mockPromptInput.mockResolvedValueOnce('');
+    mockConfirm.mockResolvedValueOnce(true);
+
     await saveTaskFromInteractive(testDir, 'Task content');

-    // Then
-    expect(mockSuccess).toHaveBeenCalledWith('Task created: test-task.yaml');
-    expect(mockInfo).toHaveBeenCalledWith(expect.stringContaining('Path:'));
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
-    expect(content).toContain('worktree: true');
-    expect(content).toContain('auto_pr: true');
+    expect(mockSuccess).toHaveBeenCalledWith(expect.stringContaining('Task created:'));
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.worktree).toBe(true);
+    expect(task.auto_pr).toBe(true);
   });

-  it('should save task without worktree settings when user declines worktree', async () => {
-    // Given: user declines worktree
-    mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
-
-    // When
+  it('should save task without worktree settings when declined', async () => {
+    mockConfirm.mockResolvedValueOnce(false);
+
     await saveTaskFromInteractive(testDir, 'Task content');

-    // Then
-    expect(mockSuccess).toHaveBeenCalledWith('Task created: test-task.yaml');
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
-    expect(content).not.toContain('worktree:');
-    expect(content).not.toContain('branch:');
-    expect(content).not.toContain('auto_pr:');
-  });
-
-  it('should save custom worktree path and branch when specified', async () => {
-    // Given: user specifies custom path and branch
-    mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
-    mockPromptInput.mockResolvedValueOnce('/custom/path'); // Worktree path
-    mockPromptInput.mockResolvedValueOnce('feat/branch'); // Branch name
-    mockConfirm.mockResolvedValueOnce(false); // Auto-create PR? → No
-
-    // When
-    await saveTaskFromInteractive(testDir, 'Task content');
-
-    // Then
-    const tasksDir = path.join(testDir, '.takt', 'tasks');
-    const files = fs.readdirSync(tasksDir);
-    const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
-    expect(content).toContain('worktree: /custom/path');
-    expect(content).toContain('branch: feat/branch');
-    expect(content).toContain('auto_pr: false');
-  });
-
-  it('should display worktree/branch/auto-PR info when settings are provided', async () => {
-    // Given
-    mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
-    mockPromptInput.mockResolvedValueOnce('/my/path'); // Worktree path
-    mockPromptInput.mockResolvedValueOnce('my-branch'); // Branch name
-    mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes
-
-    // When
-    await saveTaskFromInteractive(testDir, 'Task content');
-
-    // Then
-    expect(mockInfo).toHaveBeenCalledWith(' Worktree: /my/path');
-    expect(mockInfo).toHaveBeenCalledWith(' Branch: my-branch');
-    expect(mockInfo).toHaveBeenCalledWith(' Auto-PR: yes');
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.worktree).toBeUndefined();
+    expect(task.branch).toBeUndefined();
+    expect(task.auto_pr).toBeUndefined();
   });

   it('should display piece info when specified', async () => {
-    // Given
-    mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
-
-    // When
+    mockConfirm.mockResolvedValueOnce(false);
+
     await saveTaskFromInteractive(testDir, 'Task content', 'review');
|
||||||
|
|
||||||
// Then
|
|
||||||
expect(mockInfo).toHaveBeenCalledWith(' Piece: review');
|
expect(mockInfo).toHaveBeenCalledWith(' Piece: review');
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should include piece in saved YAML', async () => {
|
|
||||||
// Given
|
|
||||||
mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
|
|
||||||
|
|
||||||
// When
|
|
||||||
await saveTaskFromInteractive(testDir, 'Task content', 'custom');
|
|
||||||
|
|
||||||
// Then
|
|
||||||
const tasksDir = path.join(testDir, '.takt', 'tasks');
|
|
||||||
const files = fs.readdirSync(tasksDir);
|
|
||||||
expect(files.length).toBe(1);
|
|
||||||
const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
|
|
||||||
expect(content).toContain('piece: custom');
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should not display piece info when not specified', async () => {
|
|
||||||
// Given
|
|
||||||
mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
|
|
||||||
|
|
||||||
// When
|
|
||||||
await saveTaskFromInteractive(testDir, 'Task content');
|
|
||||||
|
|
||||||
// Then
|
|
||||||
const pieceInfoCalls = mockInfo.mock.calls.filter(
|
|
||||||
(call) => typeof call[0] === 'string' && call[0].includes('Piece:')
|
|
||||||
);
|
|
||||||
expect(pieceInfoCalls.length).toBe(0);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should display auto worktree info when no custom path', async () => {
|
|
||||||
// Given
|
|
||||||
mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
|
|
||||||
mockPromptInput.mockResolvedValueOnce(''); // Worktree path → auto
|
|
||||||
mockPromptInput.mockResolvedValueOnce(''); // Branch name → auto
|
|
||||||
mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes
|
|
||||||
|
|
||||||
// When
|
|
||||||
await saveTaskFromInteractive(testDir, 'Task content');
|
|
||||||
|
|
||||||
// Then
|
|
||||||
expect(mockInfo).toHaveBeenCalledWith(' Worktree: auto');
|
|
||||||
});
|
|
||||||
});
|
});
|
||||||
|
@@ -58,13 +58,26 @@ vi.mock('../features/pieceSelection/index.js', () => ({
 }));

 import { confirm } from '../shared/prompt/index.js';
+import {
+  getCurrentPiece,
+  loadAllPiecesWithSources,
+  getPieceCategories,
+  buildCategorizedPieces,
+} from '../infra/config/index.js';
 import { createSharedClone, autoCommitAndPush, summarizeTaskName } from '../infra/task/index.js';
-import { selectAndExecuteTask } from '../features/tasks/execute/selectAndExecute.js';
+import { warnMissingPieces, selectPieceFromCategorizedPieces } from '../features/pieceSelection/index.js';
+import { selectAndExecuteTask, determinePiece } from '../features/tasks/execute/selectAndExecute.js';

 const mockConfirm = vi.mocked(confirm);
+const mockGetCurrentPiece = vi.mocked(getCurrentPiece);
+const mockLoadAllPiecesWithSources = vi.mocked(loadAllPiecesWithSources);
+const mockGetPieceCategories = vi.mocked(getPieceCategories);
+const mockBuildCategorizedPieces = vi.mocked(buildCategorizedPieces);
 const mockCreateSharedClone = vi.mocked(createSharedClone);
 const mockAutoCommitAndPush = vi.mocked(autoCommitAndPush);
 const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
+const mockWarnMissingPieces = vi.mocked(warnMissingPieces);
+const mockSelectPieceFromCategorizedPieces = vi.mocked(selectPieceFromCategorizedPieces);

 beforeEach(() => {
   vi.clearAllMocks();

@@ -102,4 +115,45 @@ describe('resolveAutoPr default in selectAndExecuteTask', () => {
     expect(autoPrCall).toBeDefined();
     expect(autoPrCall![1]).toBe(true);
   });

+  it('should warn only user-origin missing pieces during interactive selection', async () => {
+    // Given: category selection is enabled and both builtin/user missing pieces exist
+    mockGetCurrentPiece.mockReturnValue('default');
+    mockLoadAllPiecesWithSources.mockReturnValue(new Map([
+      ['default', {
+        source: 'builtin',
+        config: {
+          name: 'default',
+          movements: [],
+          initialMovement: 'start',
+          maxIterations: 1,
+        },
+      }],
+    ]));
+    mockGetPieceCategories.mockReturnValue({
+      pieceCategories: [],
+      builtinPieceCategories: [],
+      userPieceCategories: [],
+      showOthersCategory: true,
+      othersCategoryName: 'Others',
+    });
+    mockBuildCategorizedPieces.mockReturnValue({
+      categories: [],
+      allPieces: new Map(),
+      missingPieces: [
+        { categoryPath: ['Quick Start'], pieceName: 'default', source: 'builtin' },
+        { categoryPath: ['Custom'], pieceName: 'my-missing', source: 'user' },
+      ],
+    });
+    mockSelectPieceFromCategorizedPieces.mockResolvedValue('default');
+
+    // When
+    const selected = await determinePiece('/project');
+
+    // Then
+    expect(selected).toBe('default');
+    expect(mockWarnMissingPieces).toHaveBeenCalledWith([
+      { categoryPath: ['Custom'], pieceName: 'my-missing', source: 'user' },
+    ]);
+  });
 });
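The test above pins down one behavior: of the `missingPieces` returned by `buildCategorizedPieces`, only the entries with `source: 'user'` are forwarded to `warnMissingPieces`. A minimal sketch of that filtering, with the record shape inferred from the test fixtures (the type name and function name here are hypothetical, not from the real source):

```typescript
// Shape inferred from the test fixtures; hypothetical, not the real type.
type MissingPiece = {
  categoryPath: string[];
  pieceName: string;
  source: 'builtin' | 'user';
};

// Keep only user-origin entries before warning, as the test expects:
// builtin pieces that are merely uncategorized should stay silent.
function filterUserOriginMissing(missing: MissingPiece[]): MissingPiece[] {
  return missing.filter((piece) => piece.source === 'user');
}

const missing: MissingPiece[] = [
  { categoryPath: ['Quick Start'], pieceName: 'default', source: 'builtin' },
  { categoryPath: ['Custom'], pieceName: 'my-missing', source: 'user' },
];
const toWarn = filterUserOriginMissing(missing);
console.log(toWarn.map((p) => p.pieceName));
```

The same assertion appears again in the `switchPiece` tests below, so presumably both call sites share this filtering step.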
src/__tests__/slug.test.ts (new file, 53 lines)

/**
 * Unit tests for slugify utility
 *
 * Tests URL/filename-safe slug generation with CJK support.
 */

import { describe, it, expect } from 'vitest';
import { slugify } from '../shared/utils/slug.js';

describe('slugify', () => {
  it('should convert to lowercase', () => {
    expect(slugify('Hello World')).toBe('hello-world');
  });

  it('should replace non-alphanumeric characters with hyphens', () => {
    expect(slugify('foo bar_baz')).toBe('foo-bar-baz');
  });

  it('should collapse consecutive special characters into single hyphen', () => {
    expect(slugify('foo---bar baz')).toBe('foo-bar-baz');
  });

  it('should strip leading and trailing hyphens', () => {
    expect(slugify('--hello--')).toBe('hello');
    expect(slugify(' hello ')).toBe('hello');
  });

  it('should truncate to 50 characters', () => {
    const long = 'a'.repeat(100);
    expect(slugify(long).length).toBeLessThanOrEqual(50);
  });

  it('should preserve CJK characters', () => {
    expect(slugify('タスク指示書')).toBe('タスク指示書');
  });

  it('should handle mixed ASCII and CJK', () => {
    expect(slugify('Add タスク Feature')).toBe('add-タスク-feature');
  });

  it('should handle numbers', () => {
    expect(slugify('issue 123')).toBe('issue-123');
  });

  it('should handle empty result after stripping', () => {
    // All special characters → becomes empty string
    expect(slugify('!@#$%')).toBe('');
  });

  it('should handle typical GitHub issue titles', () => {
    expect(slugify('Fix: login not working (#42)')).toBe('fix-login-not-working-42');
  });
});
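The tests above fully describe the contract: lowercase, hyphenate non-alphanumeric runs, trim, cap at 50 characters, and keep CJK. A sketch of an implementation that satisfies those behaviors — this is an illustration built from the test expectations, not the actual `src/shared/utils/slug.ts`:

```typescript
// Hypothetical implementation derived from the test expectations above.
function slugify(input: string): string {
  return input
    .toLowerCase()
    // Unicode property escapes keep CJK (and any other letters/digits)
    // while turning every other run of characters into a single hyphen.
    .replace(/[^\p{L}\p{N}]+/gu, '-')
    // Strip hyphens introduced at the edges (e.g. by leading punctuation).
    .replace(/^-+|-+$/g, '')
    .slice(0, 50);
}
```

One subtlety: truncation happens after trimming, so a cut at exactly 50 characters could still end in a hyphen; the tests above only assert the length bound, so a real implementation may or may not re-trim after slicing.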
src/__tests__/state-manager.test.ts (new file, 227 lines)

/**
 * Unit tests for StateManager
 *
 * Tests piece state initialization, user input management,
 * movement iteration tracking, and output retrieval.
 */

import { describe, it, expect } from 'vitest';
import {
  StateManager,
  createInitialState,
  incrementMovementIteration,
  addUserInput,
  getPreviousOutput,
} from '../core/piece/engine/state-manager.js';
import { MAX_USER_INPUTS, MAX_INPUT_LENGTH } from '../core/piece/constants.js';
import type { PieceConfig, AgentResponse, PieceState } from '../core/models/types.js';
import type { PieceEngineOptions } from '../core/piece/types.js';

function makeConfig(overrides: Partial<PieceConfig> = {}): PieceConfig {
  return {
    name: 'test-piece',
    movements: [],
    initialMovement: 'start',
    maxIterations: 10,
    ...overrides,
  };
}

function makeOptions(overrides: Partial<PieceEngineOptions> = {}): PieceEngineOptions {
  return {
    projectCwd: '/tmp/project',
    ...overrides,
  };
}

function makeResponse(content: string): AgentResponse {
  return {
    persona: 'tester',
    status: 'done',
    content,
    timestamp: new Date(),
  };
}

describe('StateManager', () => {
  describe('constructor', () => {
    it('should initialize state with config defaults', () => {
      const manager = new StateManager(makeConfig(), makeOptions());

      expect(manager.state.pieceName).toBe('test-piece');
      expect(manager.state.currentMovement).toBe('start');
      expect(manager.state.iteration).toBe(0);
      expect(manager.state.status).toBe('running');
      expect(manager.state.userInputs).toEqual([]);
      expect(manager.state.movementOutputs.size).toBe(0);
      expect(manager.state.personaSessions.size).toBe(0);
      expect(manager.state.movementIterations.size).toBe(0);
    });

    it('should use startMovement option when provided', () => {
      const manager = new StateManager(
        makeConfig(),
        makeOptions({ startMovement: 'custom-start' }),
      );

      expect(manager.state.currentMovement).toBe('custom-start');
    });

    it('should restore initial sessions from options', () => {
      const manager = new StateManager(
        makeConfig(),
        makeOptions({
          initialSessions: { coder: 'session-1', reviewer: 'session-2' },
        }),
      );

      expect(manager.state.personaSessions.get('coder')).toBe('session-1');
      expect(manager.state.personaSessions.get('reviewer')).toBe('session-2');
    });

    it('should restore initial user inputs from options', () => {
      const manager = new StateManager(
        makeConfig(),
        makeOptions({
          initialUserInputs: ['input1', 'input2'],
        }),
      );

      expect(manager.state.userInputs).toEqual(['input1', 'input2']);
    });
  });

  describe('incrementMovementIteration', () => {
    it('should start at 1 for new movement', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      const count = manager.incrementMovementIteration('review');
      expect(count).toBe(1);
    });

    it('should increment correctly for repeated movements', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      manager.incrementMovementIteration('review');
      manager.incrementMovementIteration('review');
      const count = manager.incrementMovementIteration('review');
      expect(count).toBe(3);
    });

    it('should track different movements independently', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      manager.incrementMovementIteration('review');
      manager.incrementMovementIteration('review');
      manager.incrementMovementIteration('implement');
      expect(manager.state.movementIterations.get('review')).toBe(2);
      expect(manager.state.movementIterations.get('implement')).toBe(1);
    });
  });

  describe('addUserInput', () => {
    it('should add input to state', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      manager.addUserInput('hello');
      expect(manager.state.userInputs).toEqual(['hello']);
    });

    it('should truncate input exceeding max length', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      const longInput = 'x'.repeat(MAX_INPUT_LENGTH + 100);
      manager.addUserInput(longInput);
      expect(manager.state.userInputs[0]!.length).toBe(MAX_INPUT_LENGTH);
    });

    it('should evict oldest input when exceeding max inputs', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      for (let i = 0; i < MAX_USER_INPUTS; i++) {
        manager.addUserInput(`input-${i}`);
      }
      expect(manager.state.userInputs.length).toBe(MAX_USER_INPUTS);

      manager.addUserInput('overflow');
      expect(manager.state.userInputs.length).toBe(MAX_USER_INPUTS);
      expect(manager.state.userInputs[0]).toBe('input-1');
      expect(manager.state.userInputs[manager.state.userInputs.length - 1]).toBe('overflow');
    });
  });

  describe('getPreviousOutput', () => {
    it('should return undefined when no outputs exist', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      expect(manager.getPreviousOutput()).toBeUndefined();
    });

    it('should return the last output from movementOutputs', () => {
      const manager = new StateManager(makeConfig(), makeOptions());
      const response1 = makeResponse('first');
      const response2 = makeResponse('second');
      manager.state.movementOutputs.set('step-1', response1);
      manager.state.movementOutputs.set('step-2', response2);
      expect(manager.getPreviousOutput()?.content).toBe('second');
    });
  });
});

describe('standalone functions', () => {
  function makeState(): PieceState {
    return {
      pieceName: 'test',
      currentMovement: 'start',
      iteration: 0,
      movementOutputs: new Map(),
      userInputs: [],
      personaSessions: new Map(),
      movementIterations: new Map(),
      status: 'running',
    };
  }

  describe('createInitialState', () => {
    it('should create state from config and options', () => {
      const state = createInitialState(makeConfig(), makeOptions());
      expect(state.pieceName).toBe('test-piece');
      expect(state.currentMovement).toBe('start');
      expect(state.status).toBe('running');
    });
  });

  describe('incrementMovementIteration (standalone)', () => {
    it('should increment counter on state', () => {
      const state = makeState();
      expect(incrementMovementIteration(state, 'review')).toBe(1);
      expect(incrementMovementIteration(state, 'review')).toBe(2);
    });
  });

  describe('addUserInput (standalone)', () => {
    it('should add input and truncate', () => {
      const state = makeState();
      addUserInput(state, 'test input');
      expect(state.userInputs).toEqual(['test input']);
    });
  });

  describe('getPreviousOutput (standalone)', () => {
    it('should prefer lastOutput over movementOutputs', () => {
      const state = makeState();
      const lastOutput = makeResponse('last');
      const mapOutput = makeResponse('from-map');
      state.lastOutput = lastOutput;
      state.movementOutputs.set('step-1', mapOutput);

      expect(getPreviousOutput(state)?.content).toBe('last');
    });

    it('should fall back to movementOutputs when lastOutput is undefined', () => {
      const state = makeState();
      const mapOutput = makeResponse('from-map');
      state.movementOutputs.set('step-1', mapOutput);

      expect(getPreviousOutput(state)?.content).toBe('from-map');
    });

    it('should return undefined when both are empty', () => {
      const state = makeState();
      expect(getPreviousOutput(state)).toBeUndefined();
    });
  });
});
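The `addUserInput` tests above pin down a bounded input buffer: each entry is truncated to `MAX_INPUT_LENGTH`, and once `MAX_USER_INPUTS` entries exist the oldest is evicted (FIFO). A self-contained sketch of that behavior — the constant values are assumptions, since the real ones live in `'../core/piece/constants.js'`:

```typescript
// Assumed limits; the real values come from '../core/piece/constants.js'.
const MAX_USER_INPUTS = 10;
const MAX_INPUT_LENGTH = 2000;

interface InputState {
  userInputs: string[];
}

// Truncate each input, then evict from the front once the cap is exceeded,
// matching the truncation and FIFO-eviction assertions in the tests above.
function addUserInput(state: InputState, input: string): void {
  state.userInputs.push(input.slice(0, MAX_INPUT_LENGTH));
  while (state.userInputs.length > MAX_USER_INPUTS) {
    state.userInputs.shift();
  }
}

const state: InputState = { userInputs: [] };
for (let i = 0; i <= MAX_USER_INPUTS; i++) {
  addUserInput(state, `input-${i}`);
}
// One overflow has occurred, so 'input-0' has been evicted and the
// buffer holds input-1 .. input-10.
```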
src/__tests__/switchPiece.test.ts (new file, 90 lines)

/**
 * Tests for switchPiece behavior.
 */

import { beforeEach, describe, expect, it, vi } from 'vitest';

vi.mock('../infra/config/index.js', () => ({
  listPieceEntries: vi.fn(() => []),
  loadAllPiecesWithSources: vi.fn(() => new Map()),
  getPieceCategories: vi.fn(() => null),
  buildCategorizedPieces: vi.fn(),
  loadPiece: vi.fn(() => null),
  getCurrentPiece: vi.fn(() => 'default'),
  setCurrentPiece: vi.fn(),
}));

vi.mock('../features/pieceSelection/index.js', () => ({
  warnMissingPieces: vi.fn(),
  selectPieceFromCategorizedPieces: vi.fn(),
  selectPieceFromEntries: vi.fn(),
}));

vi.mock('../shared/ui/index.js', () => ({
  info: vi.fn(),
  success: vi.fn(),
  error: vi.fn(),
}));

import {
  loadAllPiecesWithSources,
  getPieceCategories,
  buildCategorizedPieces,
} from '../infra/config/index.js';
import {
  warnMissingPieces,
  selectPieceFromCategorizedPieces,
} from '../features/pieceSelection/index.js';
import { switchPiece } from '../features/config/switchPiece.js';

const mockLoadAllPiecesWithSources = vi.mocked(loadAllPiecesWithSources);
const mockGetPieceCategories = vi.mocked(getPieceCategories);
const mockBuildCategorizedPieces = vi.mocked(buildCategorizedPieces);
const mockWarnMissingPieces = vi.mocked(warnMissingPieces);
const mockSelectPieceFromCategorizedPieces = vi.mocked(selectPieceFromCategorizedPieces);

describe('switchPiece', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should warn only user-origin missing pieces during interactive switch', async () => {
    // Given
    mockLoadAllPiecesWithSources.mockReturnValue(new Map([
      ['default', {
        source: 'builtin',
        config: {
          name: 'default',
          movements: [],
          initialMovement: 'start',
          maxIterations: 1,
        },
      }],
    ]));
    mockGetPieceCategories.mockReturnValue({
      pieceCategories: [],
      builtinPieceCategories: [],
      userPieceCategories: [],
      showOthersCategory: true,
      othersCategoryName: 'Others',
    });
    mockBuildCategorizedPieces.mockReturnValue({
      categories: [],
      allPieces: new Map(),
      missingPieces: [
        { categoryPath: ['Quick Start'], pieceName: 'default', source: 'builtin' },
        { categoryPath: ['Custom'], pieceName: 'my-missing', source: 'user' },
      ],
    });
    mockSelectPieceFromCategorizedPieces.mockResolvedValue(null);

    // When
    const switched = await switchPiece('/project');

    // Then
    expect(switched).toBe(false);
    expect(mockWarnMissingPieces).toHaveBeenCalledWith([
      { categoryPath: ['Custom'], pieceName: 'my-missing', source: 'user' },
    ]);
  });
});
src/__tests__/task-schema.test.ts (new file, 224 lines)

/**
 * Unit tests for task schema validation
 *
 * Tests TaskRecordSchema cross-field validation rules (status-dependent constraints).
 */

import { describe, it, expect } from 'vitest';
import {
  TaskRecordSchema,
  TaskFileSchema,
  TaskExecutionConfigSchema,
} from '../infra/task/schema.js';

function makePendingRecord() {
  return {
    name: 'test-task',
    status: 'pending' as const,
    content: 'task content',
    created_at: '2025-01-01T00:00:00.000Z',
    started_at: null,
    completed_at: null,
  };
}

function makeRunningRecord() {
  return {
    name: 'test-task',
    status: 'running' as const,
    content: 'task content',
    created_at: '2025-01-01T00:00:00.000Z',
    started_at: '2025-01-01T01:00:00.000Z',
    completed_at: null,
  };
}

function makeCompletedRecord() {
  return {
    name: 'test-task',
    status: 'completed' as const,
    content: 'task content',
    created_at: '2025-01-01T00:00:00.000Z',
    started_at: '2025-01-01T01:00:00.000Z',
    completed_at: '2025-01-01T02:00:00.000Z',
  };
}

function makeFailedRecord() {
  return {
    name: 'test-task',
    status: 'failed' as const,
    content: 'task content',
    created_at: '2025-01-01T00:00:00.000Z',
    started_at: '2025-01-01T01:00:00.000Z',
    completed_at: '2025-01-01T02:00:00.000Z',
    failure: { error: 'something went wrong' },
  };
}

describe('TaskExecutionConfigSchema', () => {
  it('should accept valid config with all optional fields', () => {
    const config = {
      worktree: true,
      branch: 'feature/test',
      piece: 'unit-test',
      issue: 42,
      start_movement: 'plan',
      retry_note: 'retry after fix',
      auto_pr: true,
    };
    expect(() => TaskExecutionConfigSchema.parse(config)).not.toThrow();
  });

  it('should accept empty config (all fields optional)', () => {
    expect(() => TaskExecutionConfigSchema.parse({})).not.toThrow();
  });

  it('should accept worktree as string', () => {
    expect(() => TaskExecutionConfigSchema.parse({ worktree: '/custom/path' })).not.toThrow();
  });

  it('should reject negative issue number', () => {
    expect(() => TaskExecutionConfigSchema.parse({ issue: -1 })).toThrow();
  });

  it('should reject non-integer issue number', () => {
    expect(() => TaskExecutionConfigSchema.parse({ issue: 1.5 })).toThrow();
  });
});

describe('TaskFileSchema', () => {
  it('should accept valid task with required fields', () => {
    expect(() => TaskFileSchema.parse({ task: 'do something' })).not.toThrow();
  });

  it('should reject empty task string', () => {
    expect(() => TaskFileSchema.parse({ task: '' })).toThrow();
  });

  it('should reject missing task field', () => {
    expect(() => TaskFileSchema.parse({})).toThrow();
  });
});

describe('TaskRecordSchema', () => {
  describe('pending status', () => {
    it('should accept valid pending record', () => {
      expect(() => TaskRecordSchema.parse(makePendingRecord())).not.toThrow();
    });

    it('should reject pending record with started_at', () => {
      const record = { ...makePendingRecord(), started_at: '2025-01-01T01:00:00.000Z' };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject pending record with completed_at', () => {
      const record = { ...makePendingRecord(), completed_at: '2025-01-01T02:00:00.000Z' };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject pending record with failure', () => {
      const record = { ...makePendingRecord(), failure: { error: 'fail' } };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject pending record with owner_pid', () => {
      const record = { ...makePendingRecord(), owner_pid: 1234 };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });
  });

  describe('running status', () => {
    it('should accept valid running record', () => {
      expect(() => TaskRecordSchema.parse(makeRunningRecord())).not.toThrow();
    });

    it('should reject running record without started_at', () => {
      const record = { ...makeRunningRecord(), started_at: null };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject running record with completed_at', () => {
      const record = { ...makeRunningRecord(), completed_at: '2025-01-01T02:00:00.000Z' };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject running record with failure', () => {
      const record = { ...makeRunningRecord(), failure: { error: 'fail' } };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should accept running record with owner_pid', () => {
      const record = { ...makeRunningRecord(), owner_pid: 5678 };
      expect(() => TaskRecordSchema.parse(record)).not.toThrow();
    });
  });

  describe('completed status', () => {
    it('should accept valid completed record', () => {
      expect(() => TaskRecordSchema.parse(makeCompletedRecord())).not.toThrow();
    });

    it('should reject completed record without started_at', () => {
      const record = { ...makeCompletedRecord(), started_at: null };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject completed record without completed_at', () => {
      const record = { ...makeCompletedRecord(), completed_at: null };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject completed record with failure', () => {
      const record = { ...makeCompletedRecord(), failure: { error: 'fail' } };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject completed record with owner_pid', () => {
      const record = { ...makeCompletedRecord(), owner_pid: 1234 };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });
  });

  describe('failed status', () => {
    it('should accept valid failed record', () => {
      expect(() => TaskRecordSchema.parse(makeFailedRecord())).not.toThrow();
    });

    it('should reject failed record without started_at', () => {
      const record = { ...makeFailedRecord(), started_at: null };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject failed record without completed_at', () => {
      const record = { ...makeFailedRecord(), completed_at: null };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject failed record without failure', () => {
      const record = { ...makeFailedRecord(), failure: undefined };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });

    it('should reject failed record with owner_pid', () => {
      const record = { ...makeFailedRecord(), owner_pid: 1234 };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });
  });

  describe('content requirement', () => {
    it('should accept record with content', () => {
      expect(() => TaskRecordSchema.parse(makePendingRecord())).not.toThrow();
    });

    it('should accept record with content_file', () => {
      const record = { ...makePendingRecord(), content: undefined, content_file: './task.md' };
      expect(() => TaskRecordSchema.parse(record)).not.toThrow();
    });

    it('should reject record with neither content nor content_file', () => {
      const record = { ...makePendingRecord(), content: undefined };
      expect(() => TaskRecordSchema.parse(record)).toThrow();
    });
  });
});
@@ -1,59 +1,34 @@
-/**
- * Task runner tests
- */
-
-import { describe, it, expect, beforeEach, afterEach } from 'vitest';
-import { mkdirSync, writeFileSync, existsSync, rmSync, readFileSync, readdirSync } from 'node:fs';
+import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
+import { mkdirSync, writeFileSync, existsSync, rmSync, readFileSync } from 'node:fs';
 import { join } from 'node:path';
+import { parse as parseYaml, stringify as stringifyYaml } from 'yaml';
 import { TaskRunner } from '../infra/task/runner.js';
-import { isTaskFile, parseTaskFiles } from '../infra/task/parser.js';
+import { TaskRecordSchema } from '../infra/task/schema.js';
 
-describe('isTaskFile', () => {
-  it('should accept .yaml files', () => {
-    expect(isTaskFile('task.yaml')).toBe(true);
-  });
-
-  it('should accept .yml files', () => {
-    expect(isTaskFile('task.yml')).toBe(true);
-  });
-
-  it('should accept .md files', () => {
-    expect(isTaskFile('task.md')).toBe(true);
-  });
-
-  it('should reject extensionless files like TASK-FORMAT', () => {
-    expect(isTaskFile('TASK-FORMAT')).toBe(false);
-  });
-
-  it('should reject .txt files', () => {
-    expect(isTaskFile('readme.txt')).toBe(false);
-  });
-});
-
-describe('parseTaskFiles', () => {
-  const testDir = `/tmp/takt-parse-test-${Date.now()}`;
-
-  beforeEach(() => {
-    mkdirSync(testDir, { recursive: true });
-  });
-
-  afterEach(() => {
-    if (existsSync(testDir)) {
-      rmSync(testDir, { recursive: true, force: true });
-    }
-  });
-
-  it('should ignore extensionless files like TASK-FORMAT', () => {
-    writeFileSync(join(testDir, 'TASK-FORMAT'), 'Format documentation');
-    writeFileSync(join(testDir, 'real-task.md'), 'Real task');
-
-    const tasks = parseTaskFiles(testDir);
-    expect(tasks).toHaveLength(1);
-    expect(tasks[0]?.name).toBe('real-task');
-  });
-});
-
-describe('TaskRunner', () => {
+function loadTasksFile(testDir: string): { tasks: Array<Record<string, unknown>> } {
+  const raw = readFileSync(join(testDir, '.takt', 'tasks.yaml'), 'utf-8');
+  return parseYaml(raw) as { tasks: Array<Record<string, unknown>> };
+}
+
+function writeTasksFile(testDir: string, tasks: Array<Record<string, unknown>>): void {
+  mkdirSync(join(testDir, '.takt'), { recursive: true });
+  writeFileSync(join(testDir, '.takt', 'tasks.yaml'), stringifyYaml({ tasks }), 'utf-8');
+}
+
+function createPendingRecord(overrides: Record<string, unknown>): Record<string, unknown> {
+  return TaskRecordSchema.parse({
+    name: 'task-a',
+    status: 'pending',
+    content: 'Do work',
+    created_at: '2026-02-09T00:00:00.000Z',
+    started_at: null,
+    completed_at: null,
+    owner_pid: null,
+    ...overrides,
+  }) as unknown as Record<string, unknown>;
+}
+
+describe('TaskRunner (tasks.yaml)', () => {
   const testDir = `/tmp/takt-task-test-${Date.now()}`;
   let runner: TaskRunner;
@@ -68,465 +43,245 @@ describe('TaskRunner', () => {
     }
   });
 
-  describe('ensureDirs', () => {
-    it('should create tasks, completed, and failed directories', () => {
-      runner.ensureDirs();
-      expect(existsSync(join(testDir, '.takt', 'tasks'))).toBe(true);
-      expect(existsSync(join(testDir, '.takt', 'completed'))).toBe(true);
-      expect(existsSync(join(testDir, '.takt', 'failed'))).toBe(true);
-    });
-  });
-
-  describe('listTasks', () => {
-    it('should return empty array when no tasks', () => {
-      const tasks = runner.listTasks();
-      expect(tasks).toEqual([]);
-    });
-
-    it('should list tasks sorted by name', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, '02-second.md'), 'Second task');
-      writeFileSync(join(tasksDir, '01-first.md'), 'First task');
-      writeFileSync(join(tasksDir, '03-third.md'), 'Third task');
-
-      const tasks = runner.listTasks();
-      expect(tasks).toHaveLength(3);
-      expect(tasks[0]?.name).toBe('01-first');
-      expect(tasks[1]?.name).toBe('02-second');
-      expect(tasks[2]?.name).toBe('03-third');
-    });
-
-    it('should only list .md files', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'task.md'), 'Task content');
-      writeFileSync(join(tasksDir, 'readme.txt'), 'Not a task');
-
-      const tasks = runner.listTasks();
-      expect(tasks).toHaveLength(1);
-      expect(tasks[0]?.name).toBe('task');
-    });
-  });
-
-  describe('getTask', () => {
-    it('should return null for non-existent task', () => {
-      const task = runner.getTask('non-existent');
-      expect(task).toBeNull();
-    });
-
-    it('should return task info for existing task', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'my-task.md'), 'Task content');
-
-      const task = runner.getTask('my-task');
-      expect(task).not.toBeNull();
-      expect(task?.name).toBe('my-task');
-      expect(task?.content).toBe('Task content');
-    });
-  });
+  it('should add tasks to .takt/tasks.yaml', () => {
+    const task = runner.addTask('Fix login flow', { piece: 'default' });
+    expect(task.name).toContain('fix-login-flow');
+    expect(existsSync(join(testDir, '.takt', 'tasks.yaml'))).toBe(true);
+  });
+
+  it('should list only pending tasks', () => {
+    runner.addTask('Task A');
+    runner.addTask('Task B');
+
+    const tasks = runner.listTasks();
+    expect(tasks).toHaveLength(2);
+    expect(tasks.every((task) => task.status === 'pending')).toBe(true);
+  });
+
+  it('should claim tasks and mark them running', () => {
+    runner.addTask('Task A');
+    runner.addTask('Task B');
+
+    const claimed = runner.claimNextTasks(1);
+    expect(claimed).toHaveLength(1);
+    expect(claimed[0]?.status).toBe('running');
+
+    const file = loadTasksFile(testDir);
+    expect(file.tasks.some((task) => task.status === 'running')).toBe(true);
+  });
 
-  describe('getNextTask', () => {
-    it('should return null when no tasks', () => {
-      const task = runner.getNextTask();
-      expect(task).toBeNull();
-    });
-
-    it('should return first task (alphabetically)', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'b-task.md'), 'B');
-      writeFileSync(join(tasksDir, 'a-task.md'), 'A');
-
-      const task = runner.getNextTask();
-      expect(task?.name).toBe('a-task');
-    });
-  });
+  it('should recover interrupted running tasks to pending', () => {
+    runner.addTask('Task A');
+    runner.claimNextTasks(1);
+    const current = loadTasksFile(testDir);
+    const running = current.tasks[0] as Record<string, unknown>;
+    running.owner_pid = 999999999;
+    writeFileSync(join(testDir, '.takt', 'tasks.yaml'), stringifyYaml(current), 'utf-8');
+
+    const recovered = runner.recoverInterruptedRunningTasks();
+    expect(recovered).toBe(1);
+
+    const tasks = runner.listTasks();
+    expect(tasks).toHaveLength(1);
+    expect(tasks[0]?.status).toBe('pending');
+  });
 
-  describe('claimNextTasks', () => {
-    it('should return empty array when no tasks', () => {
-      const tasks = runner.claimNextTasks(3);
-      expect(tasks).toEqual([]);
-    });
-
-    it('should return tasks up to the requested count', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'a-task.md'), 'A');
-      writeFileSync(join(tasksDir, 'b-task.md'), 'B');
-      writeFileSync(join(tasksDir, 'c-task.md'), 'C');
-
-      const tasks = runner.claimNextTasks(2);
-      expect(tasks).toHaveLength(2);
-      expect(tasks[0]?.name).toBe('a-task');
-      expect(tasks[1]?.name).toBe('b-task');
-    });
-
-    it('should not return already claimed tasks on subsequent calls', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'a-task.md'), 'A');
-      writeFileSync(join(tasksDir, 'b-task.md'), 'B');
-      writeFileSync(join(tasksDir, 'c-task.md'), 'C');
-
-      // Given: first call claims a-task
-      const first = runner.claimNextTasks(1);
-      expect(first).toHaveLength(1);
-      expect(first[0]?.name).toBe('a-task');
-
-      // When: second call should skip a-task
-      const second = runner.claimNextTasks(1);
-      expect(second).toHaveLength(1);
-      expect(second[0]?.name).toBe('b-task');
-
-      // When: third call should skip a-task and b-task
-      const third = runner.claimNextTasks(1);
-      expect(third).toHaveLength(1);
-      expect(third[0]?.name).toBe('c-task');
-
-      // When: fourth call should return empty (all claimed)
-      const fourth = runner.claimNextTasks(1);
-      expect(fourth).toEqual([]);
-    });
-
-    it('should release claim after completeTask', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'task-a.md'), 'Task A content');
-
-      // Given: claim the task
-      const claimed = runner.claimNextTasks(1);
-      expect(claimed).toHaveLength(1);
-
-      // When: complete the task (file is moved away)
-      runner.completeTask({
-        task: claimed[0]!,
-        success: true,
-        response: 'Done',
-        executionLog: [],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      });
-
-      // Then: claim set no longer blocks (but file is moved, so no tasks anyway)
-      const next = runner.claimNextTasks(1);
-      expect(next).toEqual([]);
-    });
-
-    it('should release claim after failTask', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'task-a.md'), 'Task A content');
-
-      // Given: claim the task
-      const claimed = runner.claimNextTasks(1);
-      expect(claimed).toHaveLength(1);
-
-      // When: fail the task (file is moved away)
-      runner.failTask({
-        task: claimed[0]!,
-        success: false,
-        response: 'Error',
-        executionLog: [],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      });
-
-      // Then: claim set no longer blocks
-      const next = runner.claimNextTasks(1);
-      expect(next).toEqual([]);
-    });
-
-    it('should not affect getNextTask (unclaimed access)', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'a-task.md'), 'A');
-      writeFileSync(join(tasksDir, 'b-task.md'), 'B');
-
-      // Given: claim a-task via claimNextTasks
-      runner.claimNextTasks(1);
-
-      // When: getNextTask is called (no claim filtering)
-      const task = runner.getNextTask();
-
-      // Then: getNextTask still returns first task (including claimed)
-      expect(task?.name).toBe('a-task');
-    });
-  });
+  it('should keep running tasks owned by a live process', () => {
+    runner.addTask('Task A');
+    runner.claimNextTasks(1);
+
+    const recovered = runner.recoverInterruptedRunningTasks();
+    expect(recovered).toBe(0);
+  });
 
-  describe('completeTask', () => {
-    it('should move task to completed directory', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      const taskFile = join(tasksDir, 'test-task.md');
-      writeFileSync(taskFile, 'Test task content');
-
-      const task = runner.getTask('test-task')!;
-      const result = {
-        task,
-        success: true,
-        response: 'Task completed successfully',
-        executionLog: ['Started', 'Done'],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      };
-
-      const reportFile = runner.completeTask(result);
-
-      // Original task file should be moved
-      expect(existsSync(taskFile)).toBe(false);
-
-      // Report should be created
-      expect(existsSync(reportFile)).toBe(true);
-      const reportContent = readFileSync(reportFile, 'utf-8');
-      expect(reportContent).toContain('# タスク実行レポート');
-      expect(reportContent).toContain('test-task');
-      expect(reportContent).toContain('成功');
-
-      // Log file should be created
-      const logFile = reportFile.replace('report.md', 'log.json');
-      expect(existsSync(logFile)).toBe(true);
-      const logData = JSON.parse(readFileSync(logFile, 'utf-8'));
-      expect(logData.taskName).toBe('test-task');
-      expect(logData.success).toBe(true);
-    });
-
-    it('should throw error when called with a failed result', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      writeFileSync(join(tasksDir, 'fail-task.md'), 'Will fail');
-
-      const task = runner.getTask('fail-task')!;
-      const result = {
-        task,
-        success: false,
-        response: 'Error occurred',
-        executionLog: ['Error'],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      };
-
-      expect(() => runner.completeTask(result)).toThrow(
-        'Cannot complete a failed task. Use failTask() instead.'
-      );
-    });
-  });
+  it('should take over stale lock file with invalid pid', () => {
+    mkdirSync(join(testDir, '.takt'), { recursive: true });
+    writeFileSync(join(testDir, '.takt', 'tasks.yaml.lock'), 'invalid-pid', 'utf-8');
+
+    const task = runner.addTask('Task with stale lock');
+    expect(task.name).toContain('task-with-stale-lock');
+    expect(existsSync(join(testDir, '.takt', 'tasks.yaml.lock'))).toBe(false);
+  });
 
-  describe('failTask', () => {
-    it('should move task to failed directory', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      mkdirSync(tasksDir, { recursive: true });
-      const taskFile = join(tasksDir, 'fail-task.md');
-      writeFileSync(taskFile, 'Task that will fail');
-
-      const task = runner.getTask('fail-task')!;
-      const result = {
-        task,
-        success: false,
-        response: 'Error occurred',
-        executionLog: ['Started', 'Error'],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      };
-
-      const reportFile = runner.failTask(result);
-
-      // Original task file should be removed from tasks dir
-      expect(existsSync(taskFile)).toBe(false);
-
-      // Report should be in .takt/failed/ (not .takt/completed/)
-      expect(reportFile).toContain(join('.takt', 'failed'));
-      expect(reportFile).not.toContain(join('.takt', 'completed'));
-      expect(existsSync(reportFile)).toBe(true);
-
-      const reportContent = readFileSync(reportFile, 'utf-8');
-      expect(reportContent).toContain('# タスク実行レポート');
-      expect(reportContent).toContain('fail-task');
-      expect(reportContent).toContain('失敗');
-
-      // Log file should be created in failed dir
-      const logFile = reportFile.replace('report.md', 'log.json');
-      expect(existsSync(logFile)).toBe(true);
-      const logData = JSON.parse(readFileSync(logFile, 'utf-8'));
-      expect(logData.taskName).toBe('fail-task');
-      expect(logData.success).toBe(false);
-    });
-
-    it('should not move failed task to completed directory', () => {
-      const tasksDir = join(testDir, '.takt', 'tasks');
-      const completedDir = join(testDir, '.takt', 'completed');
-      mkdirSync(tasksDir, { recursive: true });
-      const taskFile = join(tasksDir, 'another-fail.md');
-      writeFileSync(taskFile, 'Another failing task');
-
-      const task = runner.getTask('another-fail')!;
-      const result = {
-        task,
-        success: false,
-        response: 'Something went wrong',
-        executionLog: [],
-        startedAt: '2024-01-01T00:00:00.000Z',
-        completedAt: '2024-01-01T00:01:00.000Z',
-      };
-
-      runner.failTask(result);
-
-      // completed directory should be empty (only the dir itself exists)
-      mkdirSync(completedDir, { recursive: true });
-      const completedContents = readdirSync(completedDir);
-      expect(completedContents).toHaveLength(0);
-    });
-  });
+  it('should timeout when lock file is held by a live process', () => {
+    mkdirSync(join(testDir, '.takt'), { recursive: true });
+    writeFileSync(join(testDir, '.takt', 'tasks.yaml.lock'), String(process.pid), 'utf-8');
+
+    const dateNowSpy = vi.spyOn(Date, 'now');
+    dateNowSpy.mockReturnValueOnce(0);
+    dateNowSpy.mockReturnValue(5_000);
+
+    try {
+      expect(() => runner.listTasks()).toThrow('Failed to acquire tasks lock within 5000ms');
+    } finally {
+      dateNowSpy.mockRestore();
+      rmSync(join(testDir, '.takt', 'tasks.yaml.lock'), { force: true });
+    }
+  });
 
-  describe('getTasksDir', () => {
-    it('should return tasks directory path', () => {
-      expect(runner.getTasksDir()).toBe(join(testDir, '.takt', 'tasks'));
-    });
-  });
+  it('should recover from corrupted tasks.yaml and allow adding tasks again', () => {
+    mkdirSync(join(testDir, '.takt'), { recursive: true });
+    writeFileSync(join(testDir, '.takt', 'tasks.yaml'), 'tasks:\n - name: [broken', 'utf-8');
+
+    expect(() => runner.listTasks()).not.toThrow();
+    expect(runner.listTasks()).toEqual([]);
+    expect(existsSync(join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
+
+    const task = runner.addTask('Task after recovery');
+    expect(task.name).toContain('task-after-recovery');
+    expect(existsSync(join(testDir, '.takt', 'tasks.yaml'))).toBe(true);
+    expect(runner.listTasks()).toHaveLength(1);
+  });
 
describe('requeueFailedTask', () => {
|
it('should load pending content from relative content_file', () => {
|
||||||
it('should copy task file from failed to tasks directory', () => {
|
mkdirSync(join(testDir, 'fixtures'), { recursive: true });
|
||||||
runner.ensureDirs();
|
writeFileSync(join(testDir, 'fixtures', 'task.txt'), 'Task from file\nsecond line', 'utf-8');
|
||||||
|
writeTasksFile(testDir, [createPendingRecord({
|
||||||
|
content: undefined,
|
||||||
|
content_file: 'fixtures/task.txt',
|
||||||
|
})]);
|
||||||
|
|
||||||
// Create a failed task directory
|
const tasks = runner.listTasks();
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_my-task');
|
const pendingItems = runner.listPendingTaskItems();
|
||||||
mkdirSync(failedDir, { recursive: true });
|
|
||||||
writeFileSync(join(failedDir, 'my-task.yaml'), 'task: Do something\n');
|
|
||||||
writeFileSync(join(failedDir, 'report.md'), '# Report');
|
|
||||||
writeFileSync(join(failedDir, 'log.json'), '{}');
|
|
||||||
|
|
||||||
const result = runner.requeueFailedTask(failedDir);
|
expect(tasks[0]?.content).toBe('Task from file\nsecond line');
|
||||||
|
expect(pendingItems[0]?.content).toBe('Task from file');
|
||||||
|
});
|
||||||
|
|
||||||
// Task file should be copied to tasks dir
|
it('should load pending content from absolute content_file', () => {
|
||||||
expect(existsSync(result)).toBe(true);
|
const contentPath = join(testDir, 'absolute-task.txt');
|
||||||
expect(result).toBe(join(testDir, '.takt', 'tasks', 'my-task.yaml'));
|
writeFileSync(contentPath, 'Absolute task content', 'utf-8');
|
||||||
|
writeTasksFile(testDir, [createPendingRecord({
|
||||||
|
content: undefined,
|
||||||
|
content_file: contentPath,
|
||||||
|
})]);
|
||||||
|
|
||||||
// Original failed directory should still exist
|
const tasks = runner.listTasks();
|
||||||
expect(existsSync(failedDir)).toBe(true);
|
expect(tasks[0]?.content).toBe('Absolute task content');
|
||||||
|
});
|
||||||
|
|
||||||
// Task content should be preserved
|
it('should prefer inline content over content_file', () => {
|
||||||
const content = readFileSync(result, 'utf-8');
|
writeTasksFile(testDir, [createPendingRecord({
|
||||||
expect(content).toBe('task: Do something\n');
|
content: 'Inline content',
|
||||||
|
content_file: 'missing-content-file.txt',
|
||||||
|
})]);
|
||||||
|
|
||||||
|
const tasks = runner.listTasks();
|
||||||
|
expect(tasks[0]?.content).toBe('Inline content');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should throw when content_file target is missing', () => {
|
||||||
|
writeTasksFile(testDir, [createPendingRecord({
|
||||||
|
content: undefined,
|
||||||
|
content_file: 'missing-content-file.txt',
|
||||||
|
})]);
|
||||||
|
|
||||||
|
expect(() => runner.listTasks()).toThrow(/ENOENT|no such file/i);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should mark claimed task as completed', () => {
|
||||||
|
runner.addTask('Task A');
|
||||||
|
const task = runner.claimNextTasks(1)[0]!;
|
||||||
|
|
||||||
|
runner.completeTask({
|
||||||
|
task,
|
||||||
|
success: true,
|
||||||
|
response: 'Done',
|
||||||
|
executionLog: [],
|
||||||
|
startedAt: new Date().toISOString(),
|
||||||
|
completedAt: new Date().toISOString(),
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should add start_movement to YAML task file when specified', () => {
|
const file = loadTasksFile(testDir);
|
||||||
runner.ensureDirs();
|
expect(file.tasks[0]?.status).toBe('completed');
|
||||||
|
});
|
||||||
|
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_retry-task');
|
it('should mark claimed task as failed with failure detail', () => {
|
||||||
mkdirSync(failedDir, { recursive: true });
|
runner.addTask('Task A');
|
||||||
writeFileSync(join(failedDir, 'retry-task.yaml'), 'task: Retry me\npiece: default\n');
|
const task = runner.claimNextTasks(1)[0]!;
|
||||||
|
|
||||||
const result = runner.requeueFailedTask(failedDir, 'implement');
|
runner.failTask({
|
||||||
|
task,
|
||||||
const content = readFileSync(result, 'utf-8');
|
success: false,
|
||||||
expect(content).toContain('start_movement: implement');
|
response: 'Boom',
|
||||||
expect(content).toContain('task: Retry me');
|
executionLog: ['last message'],
|
||||||
expect(content).toContain('piece: default');
|
failureMovement: 'review',
|
||||||
|
failureLastMessage: 'last message',
|
||||||
|
startedAt: new Date().toISOString(),
|
||||||
|
completedAt: new Date().toISOString(),
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should replace existing start_movement in YAML task file', () => {
|
const failed = runner.listFailedTasks();
|
||||||
runner.ensureDirs();
|
expect(failed).toHaveLength(1);
|
||||||
|
expect(failed[0]?.failure?.error).toBe('Boom');
|
||||||
|
expect(failed[0]?.failure?.movement).toBe('review');
|
||||||
|
expect(failed[0]?.failure?.last_message).toBe('last message');
|
||||||
|
});
|
||||||
|
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_replace-task');
|
it('should requeue failed task to pending with retry metadata', () => {
|
||||||
mkdirSync(failedDir, { recursive: true });
|
runner.addTask('Task A');
|
||||||
writeFileSync(join(failedDir, 'replace-task.yaml'), 'task: Replace me\nstart_movement: plan\n');
|
const task = runner.claimNextTasks(1)[0]!;
|
||||||
|
runner.failTask({
|
||||||
const result = runner.requeueFailedTask(failedDir, 'ai_review');
|
task,
|
||||||
|
success: false,
|
||||||
const content = readFileSync(result, 'utf-8');
|
response: 'Boom',
|
||||||
expect(content).toContain('start_movement: ai_review');
|
executionLog: [],
|
||||||
expect(content).not.toContain('start_movement: plan');
|
startedAt: new Date().toISOString(),
|
||||||
|
completedAt: new Date().toISOString(),
|
||||||
});
|
});
|
||||||
|
|
||||||
it('should not modify markdown task files even with startMovement', () => {
|
runner.requeueFailedTask(task.name, 'implement', 'retry note');
|
||||||
runner.ensureDirs();
|
|
||||||
|
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_md-task');
|
const pending = runner.listTasks();
|
||||||
mkdirSync(failedDir, { recursive: true });
|
expect(pending).toHaveLength(1);
|
||||||
writeFileSync(join(failedDir, 'md-task.md'), '# Task\nDo something');
|
expect(pending[0]?.data?.start_movement).toBe('implement');
|
||||||
|
expect(pending[0]?.data?.retry_note).toBe('retry note');
|
||||||
|
});
|
||||||
|
|
||||||
const result = runner.requeueFailedTask(failedDir, 'implement');
|
it('should delete pending and failed tasks', () => {
|
||||||
|
const pending = runner.addTask('Task A');
|
||||||
|
runner.deletePendingTask(pending.name);
|
||||||
|
expect(runner.listTasks()).toHaveLength(0);
|
||||||
|
|
||||||
const content = readFileSync(result, 'utf-8');
|
const failed = runner.addTask('Task B');
|
||||||
// Markdown files should not have start_movement added
|
const running = runner.claimNextTasks(1)[0]!;
|
||||||
expect(content).toBe('# Task\nDo something');
|
runner.failTask({
|
||||||
expect(content).not.toContain('start_movement');
|
task: running,
|
||||||
});
|
success: false,
|
||||||
|
response: 'Boom',
|
||||||
it('should throw error when no task file found', () => {
|
executionLog: [],
|
||||||
runner.ensureDirs();
|
startedAt: new Date().toISOString(),
|
||||||
|
completedAt: new Date().toISOString(),
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_no-task');
|
|
||||||
mkdirSync(failedDir, { recursive: true });
|
|
||||||
writeFileSync(join(failedDir, 'report.md'), '# Report');
|
|
||||||
|
|
||||||
expect(() => runner.requeueFailedTask(failedDir)).toThrow(
|
|
||||||
/No task file found in failed directory/
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should throw error when failed directory does not exist', () => {
|
|
||||||
runner.ensureDirs();
|
|
||||||
|
|
||||||
expect(() => runner.requeueFailedTask('/nonexistent/path')).toThrow(
|
|
||||||
/Failed to read failed task directory/
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should add retry_note to YAML task file when specified', () => {
|
|
||||||
runner.ensureDirs();
|
|
||||||
|
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_note-task');
|
|
||||||
mkdirSync(failedDir, { recursive: true });
|
|
||||||
writeFileSync(join(failedDir, 'note-task.yaml'), 'task: Task with note\n');
|
|
||||||
|
|
||||||
const result = runner.requeueFailedTask(failedDir, undefined, 'Fixed the ENOENT error');
|
|
||||||
|
|
||||||
const content = readFileSync(result, 'utf-8');
|
|
||||||
expect(content).toContain('retry_note: "Fixed the ENOENT error"');
|
|
||||||
expect(content).toContain('task: Task with note');
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should escape double quotes in retry_note', () => {
|
|
||||||
runner.ensureDirs();
|
|
||||||
|
|
||||||
const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_quote-task');
|
|
||||||
mkdirSync(failedDir, { recursive: true });
|
|
||||||
writeFileSync(join(failedDir, 'quote-task.yaml'), 'task: Task with quotes\n');
|
|
||||||
|
|
||||||
const result = runner.requeueFailedTask(failedDir, undefined, 'Fixed "spawn node ENOENT" error');
|
|
||||||
|
|
||||||
const content = readFileSync(result, 'utf-8');
|
|
||||||
expect(content).toContain('retry_note: "Fixed \\"spawn node ENOENT\\" error"');
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should add both start_movement and retry_note when both specified', () => {
|
|
-    runner.ensureDirs();
-
-    const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_both-task');
-    mkdirSync(failedDir, { recursive: true });
-    writeFileSync(join(failedDir, 'both-task.yaml'), 'task: Task with both\n');
-
-    const result = runner.requeueFailedTask(failedDir, 'implement', 'Retrying from implement');
-
-    const content = readFileSync(result, 'utf-8');
-    expect(content).toContain('start_movement: implement');
-    expect(content).toContain('retry_note: "Retrying from implement"');
-  });
-
-  it('should not add retry_note to markdown task files', () => {
-    runner.ensureDirs();
-
-    const failedDir = join(testDir, '.takt', 'failed', '2026-01-31T12-00-00_md-note-task');
-    mkdirSync(failedDir, { recursive: true });
-    writeFileSync(join(failedDir, 'md-note-task.md'), '# Task\nDo something');
-
-    const result = runner.requeueFailedTask(failedDir, undefined, 'Should be ignored');
-
-    const content = readFileSync(result, 'utf-8');
-    expect(content).toBe('# Task\nDo something');
-    expect(content).not.toContain('retry_note');
   });
+    runner.deleteFailedTask(failed.name);
+    expect(runner.listFailedTasks()).toHaveLength(0);
+  });
+});
+
+describe('TaskRecordSchema', () => {
+  it('should reject failed record without failure details', () => {
+    expect(() => TaskRecordSchema.parse({
+      name: 'task-a',
+      status: 'failed',
+      content: 'Do work',
+      created_at: '2026-02-09T00:00:00.000Z',
+      started_at: '2026-02-09T00:01:00.000Z',
+      completed_at: '2026-02-09T00:02:00.000Z',
+    })).toThrow();
+  });
+
+  it('should reject completed record with failure details', () => {
+    expect(() => TaskRecordSchema.parse({
+      name: 'task-a',
+      status: 'completed',
+      content: 'Do work',
+      created_at: '2026-02-09T00:00:00.000Z',
+      started_at: '2026-02-09T00:01:00.000Z',
+      completed_at: '2026-02-09T00:02:00.000Z',
+      failure: {
+        error: 'unexpected',
+      },
+    })).toThrow();
+  });
 });
 });
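The TaskRecordSchema cases above assert a cross-field invariant: a `failed` record must carry `failure` details, and any other status must not. A minimal plain-TypeScript sketch of that rule, assuming this shape; `parseTaskRecord` and its trimmed-down field list are hypothetical stand-ins for the real schema module:

```typescript
type TaskStatus = 'pending' | 'running' | 'completed' | 'failed';

interface TaskRecord {
  name: string;
  status: TaskStatus;
  content: string;
  failure?: { error: string; movement?: string };
}

// Hypothetical stand-in for TaskRecordSchema.parse: enforce the
// cross-field rule that `failure` and `status: 'failed'` go together.
function parseTaskRecord(input: TaskRecord): TaskRecord {
  if (input.status === 'failed' && input.failure === undefined) {
    throw new Error('failed record requires failure details');
  }
  if (input.status !== 'failed' && input.failure !== undefined) {
    throw new Error(`${input.status} record must not carry failure details`);
  }
  return input;
}
```

The same invariant could equally be expressed as a schema refinement; the point is that status and failure details are validated together, not per-field.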
@@ -1,10 +1,7 @@
-/**
- * Tests for taskDeleteActions — pending/failed task deletion
- */
-
 import * as fs from 'node:fs';
 import * as path from 'node:path';
 import * as os from 'node:os';
+import { stringify as stringifyYaml } from 'yaml';
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 
 vi.mock('../shared/prompt/index.js', () => ({
@@ -35,6 +32,33 @@ const mockLogError = vi.mocked(logError);
 
 let tmpDir: string;
 
+function setupTasksFile(projectDir: string): string {
+  const tasksFile = path.join(projectDir, '.takt', 'tasks.yaml');
+  fs.mkdirSync(path.dirname(tasksFile), { recursive: true });
+  fs.writeFileSync(tasksFile, stringifyYaml({
+    tasks: [
+      {
+        name: 'pending-task',
+        status: 'pending',
+        content: 'pending',
+        created_at: '2025-01-15T00:00:00.000Z',
+        started_at: null,
+        completed_at: null,
+      },
+      {
+        name: 'failed-task',
+        status: 'failed',
+        content: 'failed',
+        created_at: '2025-01-15T00:00:00.000Z',
+        started_at: '2025-01-15T00:01:00.000Z',
+        completed_at: '2025-01-15T00:02:00.000Z',
+        failure: { error: 'boom' },
+      },
+    ],
+  }), 'utf-8');
+  return tasksFile;
+}
+
 beforeEach(() => {
   vi.clearAllMocks();
   tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'takt-test-delete-'));
@@ -44,137 +68,59 @@ afterEach(() => {
   fs.rmSync(tmpDir, { recursive: true, force: true });
 });
 
-describe('deletePendingTask', () => {
-  it('should delete pending task file when confirmed', async () => {
-    // Given
-    const filePath = path.join(tmpDir, 'my-task.md');
-    fs.writeFileSync(filePath, 'task content');
+describe('taskDeleteActions', () => {
+  it('should delete pending task when confirmed', async () => {
+    const tasksFile = setupTasksFile(tmpDir);
     const task: TaskListItem = {
       kind: 'pending',
-      name: 'my-task',
+      name: 'pending-task',
       createdAt: '2025-01-15',
-      filePath,
-      content: 'task content',
+      filePath: tasksFile,
+      content: 'pending',
     };
     mockConfirm.mockResolvedValue(true);
 
-    // When
     const result = await deletePendingTask(task);
 
-    // Then
     expect(result).toBe(true);
-    expect(fs.existsSync(filePath)).toBe(false);
-    expect(mockSuccess).toHaveBeenCalledWith('Deleted pending task: my-task');
+    const raw = fs.readFileSync(tasksFile, 'utf-8');
+    expect(raw).not.toContain('pending-task');
+    expect(mockSuccess).toHaveBeenCalledWith('Deleted pending task: pending-task');
   });
 
-  it('should not delete when user declines confirmation', async () => {
-    // Given
-    const filePath = path.join(tmpDir, 'my-task.md');
-    fs.writeFileSync(filePath, 'task content');
+  it('should delete failed task when confirmed', async () => {
+    const tasksFile = setupTasksFile(tmpDir);
     const task: TaskListItem = {
-      kind: 'pending',
-      name: 'my-task',
-      createdAt: '2025-01-15',
-      filePath,
-      content: 'task content',
+      kind: 'failed',
+      name: 'failed-task',
+      createdAt: '2025-01-15T12:34:56',
+      filePath: tasksFile,
+      content: 'failed',
     };
-    mockConfirm.mockResolvedValue(false);
+    mockConfirm.mockResolvedValue(true);
 
-    // When
-    const result = await deletePendingTask(task);
+    const result = await deleteFailedTask(task);
 
-    // Then
-    expect(result).toBe(false);
-    expect(fs.existsSync(filePath)).toBe(true);
-    expect(mockSuccess).not.toHaveBeenCalled();
+    expect(result).toBe(true);
+    const raw = fs.readFileSync(tasksFile, 'utf-8');
+    expect(raw).not.toContain('failed-task');
+    expect(mockSuccess).toHaveBeenCalledWith('Deleted failed task: failed-task');
   });
 
-  it('should return false and show error when file does not exist', async () => {
-    // Given
-    const filePath = path.join(tmpDir, 'non-existent.md');
+  it('should return false when target task is missing', async () => {
+    const tasksFile = setupTasksFile(tmpDir);
     const task: TaskListItem = {
-      kind: 'pending',
-      name: 'non-existent',
-      createdAt: '2025-01-15',
-      filePath,
+      kind: 'failed',
+      name: 'not-found',
+      createdAt: '2025-01-15T12:34:56',
+      filePath: tasksFile,
       content: '',
     };
     mockConfirm.mockResolvedValue(true);
 
-    // When
-    const result = await deletePendingTask(task);
+    const result = await deleteFailedTask(task);
 
-    // Then
     expect(result).toBe(false);
     expect(mockLogError).toHaveBeenCalled();
-    expect(mockSuccess).not.toHaveBeenCalled();
-  });
-});
-
-describe('deleteFailedTask', () => {
-  it('should delete failed task directory when confirmed', async () => {
-    // Given
-    const dirPath = path.join(tmpDir, '2025-01-15T12-34-56_my-task');
-    fs.mkdirSync(dirPath, { recursive: true });
-    fs.writeFileSync(path.join(dirPath, 'my-task.md'), 'content');
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: dirPath,
-      content: 'content',
-    };
-    mockConfirm.mockResolvedValue(true);
-
-    // When
-    const result = await deleteFailedTask(task);
-
-    // Then
-    expect(result).toBe(true);
-    expect(fs.existsSync(dirPath)).toBe(false);
-    expect(mockSuccess).toHaveBeenCalledWith('Deleted failed task: my-task');
-  });
-
-  it('should not delete when user declines confirmation', async () => {
-    // Given
-    const dirPath = path.join(tmpDir, '2025-01-15T12-34-56_my-task');
-    fs.mkdirSync(dirPath, { recursive: true });
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: dirPath,
-      content: '',
-    };
-    mockConfirm.mockResolvedValue(false);
-
-    // When
-    const result = await deleteFailedTask(task);
-
-    // Then
-    expect(result).toBe(false);
-    expect(fs.existsSync(dirPath)).toBe(true);
-    expect(mockSuccess).not.toHaveBeenCalled();
-  });
-
-  it('should return false and show error when directory does not exist', async () => {
-    // Given
-    const dirPath = path.join(tmpDir, 'non-existent-dir');
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'non-existent',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: dirPath,
-      content: '',
-    };
-    mockConfirm.mockResolvedValue(true);
-
-    // When
-    const result = await deleteFailedTask(task);
-
-    // Then
-    expect(result).toBe(false);
-    expect(mockLogError).toHaveBeenCalled();
-    expect(mockSuccess).not.toHaveBeenCalled();
   });
 });
@@ -95,6 +95,9 @@ describe('resolveTaskExecution', () => {
       name: 'simple-task',
       content: 'Simple task content',
       filePath: '/tasks/simple-task.yaml',
+      createdAt: '2026-02-09T00:00:00.000Z',
+      status: 'pending',
+      data: null,
     };
 
     // When
@@ -488,4 +491,24 @@
     // Then
     expect(result.issueNumber).toBeUndefined();
   });
+
+  it('should not start clone creation when abortSignal is already aborted', async () => {
+    // Given: Worktree task with pre-aborted signal
+    const task: TaskInfo = {
+      name: 'aborted-before-clone',
+      content: 'Task content',
+      filePath: '/tasks/task.yaml',
+      data: {
+        task: 'Task content',
+        worktree: true,
+      },
+    };
+    const controller = new AbortController();
+    controller.abort();
+
+    // When / Then
+    await expect(resolveTaskExecution(task, '/project', 'default', controller.signal)).rejects.toThrow('Task execution aborted');
+    expect(mockSummarizeTaskName).not.toHaveBeenCalled();
+    expect(mockCreateSharedClone).not.toHaveBeenCalled();
+  });
 });
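The pre-aborted-signal test above depends on a guard pattern: consult `signal.aborted` before any work begins, so a task cancelled ahead of time never reaches clone creation. A minimal sketch under that assumption; `runWithAbort` and its error message are illustrative, not the module's actual API:

```typescript
// Reject before invoking the work callback if the caller already cancelled.
async function runWithAbort<T>(
  signal: AbortSignal,
  work: () => Promise<T>,
): Promise<T> {
  if (signal.aborted) {
    throw new Error('Task execution aborted');
  }
  return work();
}

// Usage: an already-aborted controller never invokes the work callback,
// so no expensive side effect (like a git clone) is started.
const controller = new AbortController();
controller.abort();

let started = false;
runWithAbort(controller.signal, async () => {
  started = true;
  return 'clone';
}).catch(() => {
  /* rejection is expected */
});
```

Because the `aborted` check runs before the callback is ever called, `started` stays `false`, which is exactly what the two `not.toHaveBeenCalled()` assertions verify.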
@@ -1,10 +1,7 @@
-/**
- * Tests for taskRetryActions — failed task retry functionality
- */
-
 import * as fs from 'node:fs';
 import * as path from 'node:path';
 import * as os from 'node:os';
+import { stringify as stringifyYaml } from 'yaml';
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 
 vi.mock('../shared/prompt/index.js', () => ({
@@ -29,10 +26,6 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
   }),
 }));
 
-vi.mock('../infra/fs/session.js', () => ({
-  extractFailureInfo: vi.fn(),
-}));
-
 vi.mock('../infra/config/index.js', () => ({
   loadGlobalConfig: vi.fn(),
   loadPieceByIdentifier: vi.fn(),
@@ -66,16 +59,37 @@ const defaultPieceConfig: PieceConfig = {
   ],
 };
 
-const customPieceConfig: PieceConfig = {
-  name: 'custom',
-  description: 'Custom piece',
-  initialMovement: 'step1',
-  maxIterations: 10,
-  movements: [
-    { name: 'step1', persona: 'coder', instruction: '' },
-    { name: 'step2', persona: 'reviewer', instruction: '' },
-  ],
-};
+function writeFailedTask(projectDir: string, name: string): TaskListItem {
+  const tasksFile = path.join(projectDir, '.takt', 'tasks.yaml');
+  fs.mkdirSync(path.dirname(tasksFile), { recursive: true });
+  fs.writeFileSync(tasksFile, stringifyYaml({
+    tasks: [
+      {
+        name,
+        status: 'failed',
+        content: 'Do something',
+        created_at: '2025-01-15T12:00:00.000Z',
+        started_at: '2025-01-15T12:01:00.000Z',
+        completed_at: '2025-01-15T12:02:00.000Z',
+        piece: 'default',
+        failure: {
+          movement: 'review',
+          error: 'Boom',
+        },
+      },
+    ],
+  }), 'utf-8');
+
+  return {
+    kind: 'failed',
+    name,
+    createdAt: '2025-01-15T12:02:00.000Z',
+    filePath: tasksFile,
+    content: 'Do something',
+    data: { task: 'Do something', piece: 'default' },
+    failure: { movement: 'review', error: 'Boom' },
+  };
+}
 
 beforeEach(() => {
   vi.clearAllMocks();
@@ -88,264 +102,49 @@ afterEach(() => {
 
 describe('retryFailedTask', () => {
   it('should requeue task with selected movement', async () => {
-    // Given: a failed task directory with a task file
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_my-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'my-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
+    const task = writeFailedTask(tmpDir, 'my-task');
     mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
     mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
     mockSelectOption.mockResolvedValue('implement');
-    mockPromptInput.mockResolvedValue(''); // Empty retry note
+    mockPromptInput.mockResolvedValue('');
 
-    // When
     const result = await retryFailedTask(task, tmpDir);
 
-    // Then
     expect(result).toBe(true);
     expect(mockSuccess).toHaveBeenCalledWith('Task requeued: my-task');
 
-    // Verify requeued file
-    const requeuedFile = path.join(tasksDir, 'my-task.yaml');
-    expect(fs.existsSync(requeuedFile)).toBe(true);
-    const content = fs.readFileSync(requeuedFile, 'utf-8');
-    expect(content).toContain('start_movement: implement');
-  });
-
-  it('should use piece field from task file instead of defaultPiece', async () => {
-    // Given: a failed task with piece: custom in YAML
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_custom-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(
-      path.join(failedDir, 'custom-task.yaml'),
-      'task: Do something\npiece: custom\n',
-    );
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'custom-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
-    mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
-    // Should be called with 'custom', not 'default'
-    mockLoadPieceByIdentifier.mockImplementation((name: string) => {
-      if (name === 'custom') return customPieceConfig;
-      if (name === 'default') return defaultPieceConfig;
-      return null;
-    });
-    mockSelectOption.mockResolvedValue('step2');
-    mockPromptInput.mockResolvedValue('');
-
-    // When
-    const result = await retryFailedTask(task, tmpDir);
-
-    // Then
-    expect(result).toBe(true);
-    expect(mockLoadPieceByIdentifier).toHaveBeenCalledWith('custom', tmpDir);
-    expect(mockSuccess).toHaveBeenCalledWith('Task requeued: custom-task');
-  });
-
-  it('should return false when user cancels movement selection', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_my-task');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'my-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
-    mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
-    mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
-    mockSelectOption.mockResolvedValue(null); // User cancelled
-    // No need to mock promptInput since user cancelled before reaching it
-
-    // When
-    const result = await retryFailedTask(task, tmpDir);
-
-    // Then
-    expect(result).toBe(false);
-    expect(mockSuccess).not.toHaveBeenCalled();
-    expect(mockPromptInput).not.toHaveBeenCalled();
-  });
-
-  it('should return false and show error when piece not found', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_my-task');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'my-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
-    mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'nonexistent' });
-    mockLoadPieceByIdentifier.mockReturnValue(null);
-
-    // When
-    const result = await retryFailedTask(task, tmpDir);
-
-    // Then
-    expect(result).toBe(false);
-    expect(mockLogError).toHaveBeenCalledWith(
-      'Piece "nonexistent" not found. Cannot determine available movements.',
-    );
-  });
-
-  it('should fallback to defaultPiece when task file has no piece field', async () => {
-    // Given: a failed task without piece field in YAML
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_plain-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(
-      path.join(failedDir, 'plain-task.yaml'),
-      'task: Do something without piece\n',
-    );
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'plain-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something without piece',
-    };
-
-    mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
-    mockLoadPieceByIdentifier.mockImplementation((name: string) => {
-      if (name === 'default') return defaultPieceConfig;
-      return null;
-    });
-    mockSelectOption.mockResolvedValue('plan');
-    mockPromptInput.mockResolvedValue('');
-
-    // When
-    const result = await retryFailedTask(task, tmpDir);
-
-    // Then
-    expect(result).toBe(true);
-    expect(mockLoadPieceByIdentifier).toHaveBeenCalledWith('default', tmpDir);
+    const tasksYaml = fs.readFileSync(path.join(tmpDir, '.takt', 'tasks.yaml'), 'utf-8');
+    expect(tasksYaml).toContain('status: pending');
+    expect(tasksYaml).toContain('start_movement: implement');
   });
 
   it('should not add start_movement when initial movement is selected', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_my-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'my-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'my-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
+    const task = writeFailedTask(tmpDir, 'my-task');
     mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
     mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
-    mockSelectOption.mockResolvedValue('plan'); // Initial movement
-    mockPromptInput.mockResolvedValue(''); // Empty retry note
+    mockSelectOption.mockResolvedValue('plan');
+    mockPromptInput.mockResolvedValue('');
 
-    // When
     const result = await retryFailedTask(task, tmpDir);
 
-    // Then
     expect(result).toBe(true);
-
-    // Verify requeued file does not have start_movement
-    const requeuedFile = path.join(tasksDir, 'my-task.yaml');
-    const content = fs.readFileSync(requeuedFile, 'utf-8');
-    expect(content).not.toContain('start_movement');
+    const tasksYaml = fs.readFileSync(path.join(tmpDir, '.takt', 'tasks.yaml'), 'utf-8');
+    expect(tasksYaml).not.toContain('start_movement');
   });
 
-  it('should add retry_note when user provides one', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_retry-note-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'retry-note-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'retry-note-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
+  it('should return false and show error when piece not found', async () => {
+    const task = writeFailedTask(tmpDir, 'my-task');
     mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
-    mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
-    mockSelectOption.mockResolvedValue('implement');
-    mockPromptInput.mockResolvedValue('Fixed spawn node ENOENT error');
+    mockLoadPieceByIdentifier.mockReturnValue(null);
 
-    // When
     const result = await retryFailedTask(task, tmpDir);
 
-    // Then
-    expect(result).toBe(true);
-
-    const requeuedFile = path.join(tasksDir, 'retry-note-task.yaml');
-    const content = fs.readFileSync(requeuedFile, 'utf-8');
-    expect(content).toContain('start_movement: implement');
-    expect(content).toContain('retry_note: "Fixed spawn node ENOENT error"');
-  });
-
-  it('should not add retry_note when user skips it', async () => {
-    // Given
-    const failedDir = path.join(tmpDir, '.takt', 'failed', '2025-01-15T12-34-56_no-note-task');
-    const tasksDir = path.join(tmpDir, '.takt', 'tasks');
-    fs.mkdirSync(failedDir, { recursive: true });
-    fs.mkdirSync(tasksDir, { recursive: true });
-    fs.writeFileSync(path.join(failedDir, 'no-note-task.yaml'), 'task: Do something\n');
-
-    const task: TaskListItem = {
-      kind: 'failed',
-      name: 'no-note-task',
-      createdAt: '2025-01-15T12:34:56',
-      filePath: failedDir,
-      content: 'Do something',
-    };
-
-    mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
-    mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
-    mockSelectOption.mockResolvedValue('implement');
-    mockPromptInput.mockResolvedValue(''); // Empty string - user skipped
-
-    // When
-    const result = await retryFailedTask(task, tmpDir);
-
-    // Then
-    expect(result).toBe(true);
-
-    const requeuedFile = path.join(tasksDir, 'no-note-task.yaml');
-    const content = fs.readFileSync(requeuedFile, 'utf-8');
-    expect(content).toContain('start_movement: implement');
-    expect(content).not.toContain('retry_note');
+    expect(result).toBe(false);
+    expect(mockLogError).toHaveBeenCalledWith(
+      'Piece "default" not found. Cannot determine available movements.',
+    );
   });
 });
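The retry tests above reduce to one record update in `tasks.yaml`: reset `status` to `pending`, keep `start_movement` only when the chosen movement is not the piece's initial one, and keep `retry_note` only when the user typed one. An in-memory sketch of that update, where the `requeueRecord` helper is hypothetical and the field names follow the fixtures above:

```typescript
interface RetryableRecord {
  name: string;
  status: string;
  start_movement?: string;
  retry_note?: string;
  failure?: { movement: string; error: string };
}

// Hypothetical requeue step: produce the pending record that would be
// written back into tasks.yaml for a retried task.
function requeueRecord(
  record: RetryableRecord,
  selectedMovement: string,
  initialMovement: string,
  retryNote: string,
): RetryableRecord {
  const next: RetryableRecord = { ...record, status: 'pending' };
  delete next.failure; // the new attempt starts clean
  if (selectedMovement !== initialMovement) {
    next.start_movement = selectedMovement; // resume mid-piece
  }
  if (retryNote.trim() !== '') {
    next.retry_note = retryNote; // only persist a non-empty note
  }
  return next;
}
```

Serializing `next` back into the `tasks` array (with `stringifyYaml`, as the fixtures do) yields exactly the `status: pending` / `start_movement: implement` strings the assertions look for.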
|||||||
136
src/__tests__/text.test.ts
Normal file
136
src/__tests__/text.test.ts
Normal file
@ -0,0 +1,136 @@
|
|||||||
|
/**
|
||||||
|
* Unit tests for text display width utilities
|
||||||
|
*
|
||||||
|
* Tests full-width character detection, display width calculation,
|
||||||
|
* ANSI stripping, and text truncation.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { describe, it, expect } from 'vitest';
|
||||||
|
import {
|
||||||
|
isFullWidth,
|
||||||
|
getDisplayWidth,
|
||||||
|
stripAnsi,
|
||||||
|
truncateText,
|
||||||
|
} from '../shared/utils/text.js';
|
||||||
|
|
||||||
|
describe('isFullWidth', () => {
|
||||||
|
it('should return false for ASCII characters', () => {
|
||||||
|
expect(isFullWidth('A'.codePointAt(0)!)).toBe(false);
|
||||||
|
expect(isFullWidth('z'.codePointAt(0)!)).toBe(false);
|
||||||
|
expect(isFullWidth('0'.codePointAt(0)!)).toBe(false);
|
||||||
|
expect(isFullWidth(' '.codePointAt(0)!)).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return true for CJK ideographs', () => {
|
||||||
|
expect(isFullWidth('漢'.codePointAt(0)!)).toBe(true);
|
||||||
|
expect(isFullWidth('字'.codePointAt(0)!)).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return true for Hangul syllables', () => {
|
||||||
|
expect(isFullWidth('한'.codePointAt(0)!)).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return true for fullwidth ASCII variants', () => {
|
||||||
|
expect(isFullWidth('A'.codePointAt(0)!)).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return true for Hangul Jamo', () => {
|
||||||
|
// U+1100 (ᄀ) is in Hangul Jamo range
|
||||||
|
expect(isFullWidth(0x1100)).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return true for CJK radicals', () => {
|
||||||
|
// U+2E80 is in CJK radicals range
|
||||||
|
expect(isFullWidth(0x2E80)).toBe(true);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('getDisplayWidth', () => {
|
||||||
|
it('should return 0 for empty string', () => {
|
||||||
|
expect(getDisplayWidth('')).toBe(0);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should count ASCII characters as width 1', () => {
|
||||||
|
expect(getDisplayWidth('hello')).toBe(5);
|
||||||
|
expect(getDisplayWidth('abc123')).toBe(6);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should count CJK characters as width 2', () => {
|
||||||
|
expect(getDisplayWidth('漢字')).toBe(4);
|
||||||
|
expect(getDisplayWidth('テスト')).toBe(6);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle mixed ASCII and CJK', () => {
|
||||||
|
expect(getDisplayWidth('hello漢字')).toBe(9); // 5 + 4
|
||||||
|
expect(getDisplayWidth('AB漢C')).toBe(5); // 1+1+2+1
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('stripAnsi', () => {
|
||||||
|
it('should strip CSI color codes', () => {
|
||||||
|
expect(stripAnsi('\x1b[31mred text\x1b[0m')).toBe('red text');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should strip multiple CSI sequences', () => {
|
||||||
|
expect(stripAnsi('\x1b[1m\x1b[32mbold green\x1b[0m')).toBe('bold green');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should strip cursor movement sequences', () => {
|
||||||
|
expect(stripAnsi('\x1b[2Amove up')).toBe('move up');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should strip OSC sequences (BEL terminated)', () => {
|
||||||
|
expect(stripAnsi('\x1b]0;title\x07rest')).toBe('rest');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should strip OSC sequences (ST terminated)', () => {
|
||||||
|
    expect(stripAnsi('\x1b]0;title\x1b\\rest')).toBe('rest');
  });

  it('should return unchanged string with no escapes', () => {
    expect(stripAnsi('plain text')).toBe('plain text');
  });

  it('should handle empty string', () => {
    expect(stripAnsi('')).toBe('');
  });
});

describe('truncateText', () => {
  it('should return empty string for maxWidth 0', () => {
    expect(truncateText('hello', 0)).toBe('');
  });

  it('should not truncate text shorter than maxWidth', () => {
    expect(truncateText('hello', 10)).toBe('hello');
  });

  it('should truncate and add ellipsis for long text', () => {
    const result = truncateText('hello world', 6);
    expect(result).toBe('hello…');
    expect(getDisplayWidth(result)).toBeLessThanOrEqual(6);
  });

  it('should handle CJK characters correctly when truncating', () => {
    // Each CJK char is width 2, so "漢字テスト" = 10 width
    const result = truncateText('漢字テスト', 5);
    // Should fit within 5 columns including ellipsis
    expect(getDisplayWidth(result)).toBeLessThanOrEqual(5);
    expect(result.endsWith('…')).toBe(true);
  });

  it('should handle mixed content', () => {
    const result = truncateText('AB漢字CD', 5);
    expect(getDisplayWidth(result)).toBeLessThanOrEqual(5);
    expect(result.endsWith('…')).toBe(true);
  });

  it('should truncate text at exact maxWidth since ellipsis space is reserved', () => {
    // truncateText always reserves 1 column for ellipsis
    expect(truncateText('abcde', 5)).toBe('abcd…');
  });

  it('should return text as-is when shorter than maxWidth', () => {
    expect(truncateText('abcd', 5)).toBe('abcd');
  });
});
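The width accounting these tests rely on can be sketched as follows. This is a hypothetical implementation written only to satisfy the assertions above, not the project's actual `truncateText`/`getDisplayWidth`; in particular the double-width character detection is a simplified assumption, not a full wcwidth table.

```typescript
// Hypothetical sketch of getDisplayWidth/truncateText consistent with the
// tests above; the real implementations may differ.
function charWidth(ch: string): number {
  const code = ch.codePointAt(0) ?? 0;
  // Simplified heuristic: treat common CJK blocks (Hangul Jamo, CJK
  // ideographs, kana, Hangul syllables, compatibility ideographs,
  // full-width forms) as 2 columns, everything else as 1.
  const doubleWidth =
    (code >= 0x1100 && code <= 0x115f) ||
    (code >= 0x2e80 && code <= 0xa4cf) ||
    (code >= 0xac00 && code <= 0xd7a3) ||
    (code >= 0xf900 && code <= 0xfaff) ||
    (code >= 0xff00 && code <= 0xff60);
  return doubleWidth ? 2 : 1;
}

function getDisplayWidth(text: string): number {
  let width = 0;
  for (const ch of text) width += charWidth(ch);
  return width;
}

function truncateText(text: string, maxWidth: number): string {
  if (maxWidth <= 0) return '';
  if (getDisplayWidth(text) < maxWidth) return text;
  // One column is always reserved for the ellipsis, so even a string of
  // exactly maxWidth columns gets truncated ("abcde" at 5 -> "abcd…").
  let out = '';
  let width = 0;
  for (const ch of text) {
    const cw = charWidth(ch);
    if (width + cw > maxWidth - 1) break;
    out += ch;
    width += cw;
  }
  return out + '…';
}
```

Iterating with `for…of` (code points, not UTF-16 units) is what keeps the CJK cases from splitting a character mid-way.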
@@ -4,6 +4,7 @@
 import { describe, it, expect } from 'vitest';
 import { determineNextMovementByRules } from '../core/piece/index.js';
+import { extractBlockedPrompt } from '../core/piece/engine/transitions.js';
 import type { PieceMovement } from '../core/models/index.js';

 function createMovementWithRules(rules: { condition: string; next: string }[]): PieceMovement {
@@ -79,3 +80,40 @@ describe('determineNextMovementByRules', () => {
     expect(determineNextMovementByRules(step, 1)).toBeNull();
   });
 });
+
+describe('extractBlockedPrompt', () => {
+  it('should extract prompt after "必要な情報:" pattern', () => {
+    const content = '処理がブロックされました。\n必要な情報: デプロイ先の環境を教えてください';
+    expect(extractBlockedPrompt(content)).toBe('デプロイ先の環境を教えてください');
+  });
+
+  it('should extract prompt after "質問:" pattern', () => {
+    const content = '質問: どのブランチにマージしますか?';
+    expect(extractBlockedPrompt(content)).toBe('どのブランチにマージしますか?');
+  });
+
+  it('should extract prompt after "理由:" pattern', () => {
+    const content = '理由: 権限が不足しています';
+    expect(extractBlockedPrompt(content)).toBe('権限が不足しています');
+  });
+
+  it('should extract prompt after "確認:" pattern', () => {
+    const content = '確認: この変更を続けてもよいですか?';
+    expect(extractBlockedPrompt(content)).toBe('この変更を続けてもよいですか?');
+  });
+
+  it('should support full-width colon', () => {
+    const content = '必要な情報:ファイルパスを指定してください';
+    expect(extractBlockedPrompt(content)).toBe('ファイルパスを指定してください');
+  });
+
+  it('should return full content when no pattern matches', () => {
+    const content = 'Something went wrong and I need help';
+    expect(extractBlockedPrompt(content)).toBe('Something went wrong and I need help');
+  });
+
+  it('should return first matching pattern when multiple exist', () => {
+    const content = '質問: 最初の質問\n確認: 二番目の質問';
+    expect(extractBlockedPrompt(content)).toBe('最初の質問');
+  });
+});
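A minimal sketch of an extractor consistent with these cases, assuming label-based line matching with first-match-wins semantics; the real implementation in `core/piece/engine/transitions.ts` may use different patterns.

```typescript
// Hypothetical extractor matching the test cases above. Labels mean
// "required info", "question", "reason", "confirmation"; both the
// half-width ":" and full-width ":" colon are accepted.
function extractBlockedPrompt(content: string): string {
  const labels = ['必要な情報', '質問', '理由', '確認'];
  for (const line of content.split('\n')) {
    for (const label of labels) {
      const match = line.match(new RegExp(`^${label}[::]\\s*(.+)$`));
      if (match) return (match[1] ?? '').trim();
    }
  }
  // No label matched: fall back to returning the full content.
  return content;
}
```

Scanning line by line (rather than the whole string) is what makes the first labeled line win when several patterns appear.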
src/__tests__/watchTasks.test.ts (new file)
@@ -0,0 +1,89 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';
import type { TaskInfo } from '../infra/task/index.js';

const {
  mockRecoverInterruptedRunningTasks,
  mockGetTasksDir,
  mockWatch,
  mockStop,
  mockExecuteAndCompleteTask,
  mockInfo,
  mockHeader,
  mockBlankLine,
  mockStatus,
  mockSuccess,
  mockGetCurrentPiece,
} = vi.hoisted(() => ({
  mockRecoverInterruptedRunningTasks: vi.fn(),
  mockGetTasksDir: vi.fn(),
  mockWatch: vi.fn(),
  mockStop: vi.fn(),
  mockExecuteAndCompleteTask: vi.fn(),
  mockInfo: vi.fn(),
  mockHeader: vi.fn(),
  mockBlankLine: vi.fn(),
  mockStatus: vi.fn(),
  mockSuccess: vi.fn(),
  mockGetCurrentPiece: vi.fn(),
}));

vi.mock('../infra/task/index.js', () => ({
  TaskRunner: vi.fn().mockImplementation(() => ({
    recoverInterruptedRunningTasks: mockRecoverInterruptedRunningTasks,
    getTasksDir: mockGetTasksDir,
  })),
  TaskWatcher: vi.fn().mockImplementation(() => ({
    watch: mockWatch,
    stop: mockStop,
  })),
}));

vi.mock('../features/tasks/execute/taskExecution.js', () => ({
  executeAndCompleteTask: mockExecuteAndCompleteTask,
}));

vi.mock('../shared/ui/index.js', () => ({
  header: mockHeader,
  info: mockInfo,
  success: mockSuccess,
  status: mockStatus,
  blankLine: mockBlankLine,
}));

vi.mock('../infra/config/index.js', () => ({
  getCurrentPiece: mockGetCurrentPiece,
}));

import { watchTasks } from '../features/tasks/watch/index.js';

describe('watchTasks', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    mockGetCurrentPiece.mockReturnValue('default');
    mockRecoverInterruptedRunningTasks.mockReturnValue(0);
    mockGetTasksDir.mockReturnValue('/project/.takt/tasks.yaml');
    mockExecuteAndCompleteTask.mockResolvedValue(true);

    mockWatch.mockImplementation(async (onTask: (task: TaskInfo) => Promise<void>) => {
      await onTask({
        name: 'task-1',
        content: 'Task 1',
        filePath: '/project/.takt/tasks.yaml',
        createdAt: '2026-02-09T00:00:00.000Z',
        status: 'running',
        data: null,
      });
    });
  });

  it('should recover interrupted running tasks to pending on watch start', async () => {
    mockRecoverInterruptedRunningTasks.mockReturnValue(1);

    await watchTasks('/project');

    expect(mockRecoverInterruptedRunningTasks).toHaveBeenCalledTimes(1);
    expect(mockInfo).toHaveBeenCalledWith('Recovered 1 interrupted running task(s) to pending.');
    expect(mockWatch).toHaveBeenCalledTimes(1);
    expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(1);
  });
});
@@ -1,24 +1,26 @@
-/**
- * TaskWatcher tests
- */
 import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { mkdirSync, writeFileSync, existsSync, rmSync } from 'node:fs';
 import { join } from 'node:path';
+import { stringify as stringifyYaml } from 'yaml';
 import { TaskWatcher } from '../infra/task/watcher.js';
+import { TaskRunner } from '../infra/task/runner.js';
 import type { TaskInfo } from '../infra/task/types.js';

 describe('TaskWatcher', () => {
   const testDir = `/tmp/takt-watcher-test-${Date.now()}`;
   let watcher: TaskWatcher | null = null;

+  function writeTasksYaml(tasks: Array<Record<string, unknown>>): void {
+    const tasksFile = join(testDir, '.takt', 'tasks.yaml');
+    mkdirSync(join(testDir, '.takt'), { recursive: true });
+    writeFileSync(tasksFile, stringifyYaml({ tasks }), 'utf-8');
+  }
+
   beforeEach(() => {
-    mkdirSync(join(testDir, '.takt', 'tasks'), { recursive: true });
-    mkdirSync(join(testDir, '.takt', 'completed'), { recursive: true });
+    mkdirSync(join(testDir, '.takt'), { recursive: true });
   });

   afterEach(() => {
-    // Ensure watcher is stopped before cleanup
     if (watcher) {
       watcher.stop();
       watcher = null;
@@ -41,21 +43,24 @@ describe('TaskWatcher', () => {
   });

   describe('watch', () => {
-    it('should detect and process a task file', async () => {
+    it('should detect and process a pending task from tasks.yaml', async () => {
+      writeTasksYaml([
+        {
+          name: 'test-task',
+          status: 'pending',
+          content: 'Test task content',
+          created_at: '2026-02-09T00:00:00.000Z',
+          started_at: null,
+          completed_at: null,
+        },
+      ]);
+
       watcher = new TaskWatcher(testDir, { pollInterval: 50 });
       const processed: string[] = [];

-      // Pre-create a task file
-      writeFileSync(
-        join(testDir, '.takt', 'tasks', 'test-task.md'),
-        'Test task content'
-      );
-
-      // Start watching, stop after first task
       const watchPromise = watcher.watch(async (task: TaskInfo) => {
         processed.push(task.name);
-        // Stop after processing to avoid infinite loop in test
-        watcher.stop();
+        watcher?.stop();
       });

       await watchPromise;
@@ -64,48 +69,61 @@ describe('TaskWatcher', () => {
       expect(watcher.isRunning()).toBe(false);
     });

-    it('should wait when no tasks are available', async () => {
+    it('should wait when no tasks are available and then process added task', async () => {
+      writeTasksYaml([]);
       watcher = new TaskWatcher(testDir, { pollInterval: 50 });
-      let pollCount = 0;
+      const runner = new TaskRunner(testDir);
+      let processed = 0;

-      // Start watching, add a task after a delay
-      const watchPromise = watcher.watch(async (task: TaskInfo) => {
-        pollCount++;
-        watcher.stop();
+      const watchPromise = watcher.watch(async () => {
+        processed++;
+        watcher?.stop();
       });

-      // Add task after short delay (after at least one empty poll)
       await new Promise((resolve) => setTimeout(resolve, 100));
-      writeFileSync(
-        join(testDir, '.takt', 'tasks', 'delayed-task.md'),
-        'Delayed task'
-      );
+      runner.addTask('Delayed task');

       await watchPromise;

-      expect(pollCount).toBe(1);
+      expect(processed).toBe(1);
     });

     it('should process multiple tasks sequentially', async () => {
+      writeTasksYaml([
+        {
+          name: 'a-task',
+          status: 'pending',
+          content: 'First task',
+          created_at: '2026-02-09T00:00:00.000Z',
+          started_at: null,
+          completed_at: null,
+        },
+        {
+          name: 'b-task',
+          status: 'pending',
+          content: 'Second task',
+          created_at: '2026-02-09T00:01:00.000Z',
+          started_at: null,
+          completed_at: null,
+        },
+      ]);
+
+      const runner = new TaskRunner(testDir);
       watcher = new TaskWatcher(testDir, { pollInterval: 50 });
       const processed: string[] = [];

-      // Pre-create two task files
-      writeFileSync(
-        join(testDir, '.takt', 'tasks', 'a-task.md'),
-        'First task'
-      );
-      writeFileSync(
-        join(testDir, '.takt', 'tasks', 'b-task.md'),
-        'Second task'
-      );
-
       const watchPromise = watcher.watch(async (task: TaskInfo) => {
         processed.push(task.name);
-        // Remove the task file to simulate completion
-        rmSync(task.filePath);
+        runner.completeTask({
+          task,
+          success: true,
+          response: 'Done',
+          executionLog: [],
+          startedAt: new Date().toISOString(),
+          completedAt: new Date().toISOString(),
+        });
         if (processed.length >= 2) {
-          watcher.stop();
+          watcher?.stop();
         }
       });

@@ -117,15 +135,13 @@ describe('TaskWatcher', () => {

   describe('stop', () => {
     it('should stop the watcher gracefully', async () => {
+      writeTasksYaml([]);
       watcher = new TaskWatcher(testDir, { pollInterval: 50 });

-      // Start watching, stop after a short delay
       const watchPromise = watcher.watch(async () => {
-        // Should not be called since no tasks
       });

-      // Stop after short delay
-      setTimeout(() => watcher.stop(), 100);
+      setTimeout(() => watcher?.stop(), 100);

       await watchPromise;

@@ -133,18 +149,17 @@ describe('TaskWatcher', () => {
     });

     it('should abort sleep immediately when stopped', async () => {
+      writeTasksYaml([]);
       watcher = new TaskWatcher(testDir, { pollInterval: 10000 });

       const start = Date.now();
       const watchPromise = watcher.watch(async () => {});

-      // Stop after 50ms, should not wait the full 10s
-      setTimeout(() => watcher.stop(), 50);
+      setTimeout(() => watcher?.stop(), 50);

       await watchPromise;

       const elapsed = Date.now() - start;
-      // Should complete well under the 10s poll interval
       expect(elapsed).toBeLessThan(1000);
     });
   });
 });
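The stop behavior exercised in these tests — a long poll sleep that wakes up immediately on `stop()` — can be sketched with an AbortController. This is an illustrative structure under assumed names, not the actual TaskWatcher implementation.

```typescript
// Minimal poll-based watcher sketch: pull pending tasks via a supplied
// callback, sleep between empty polls, and let stop() abort the sleep
// immediately instead of waiting out the full poll interval.
class PollingWatcher<T> {
  private running = false;
  private abortController: AbortController | null = null;

  constructor(
    private readonly nextPending: () => T | null,
    private readonly pollInterval: number,
  ) {}

  isRunning(): boolean {
    return this.running;
  }

  async watch(onTask: (task: T) => Promise<void>): Promise<void> {
    this.running = true;
    while (this.running) {
      const task = this.nextPending();
      if (task !== null) {
        await onTask(task);
      } else {
        await this.sleep(); // resolves early if stop() aborts it
      }
    }
  }

  stop(): void {
    this.running = false;
    this.abortController?.abort();
  }

  private sleep(): Promise<void> {
    this.abortController = new AbortController();
    const { signal } = this.abortController;
    return new Promise((resolve) => {
      const timer = setTimeout(resolve, this.pollInterval);
      signal.addEventListener(
        'abort',
        () => {
          clearTimeout(timer);
          resolve();
        },
        { once: true },
      );
    });
  }
}
```

Resolving (rather than rejecting) the aborted sleep keeps the loop's exit path single: the `while (this.running)` check decides termination.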
@@ -142,7 +142,7 @@ describe('runWithWorkerPool', () => {
     });
   });

-  it('should not pass taskPrefix or abortSignal for sequential execution (concurrency = 1)', async () => {
+  it('should pass abortSignal but not taskPrefix for sequential execution (concurrency = 1)', async () => {
     // Given
     const tasks = [createTask('seq-task')];
     const runner = createMockTaskRunner([]);
@@ -154,7 +154,7 @@ describe('runWithWorkerPool', () => {
     expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(1);
     const parallelOpts = mockExecuteAndCompleteTask.mock.calls[0]?.[5];
     expect(parallelOpts).toEqual({
-      abortSignal: undefined,
+      abortSignal: expect.any(AbortSignal),
       taskPrefix: undefined,
       taskColorIndex: undefined,
     });
@@ -250,6 +250,51 @@ describe('runWithWorkerPool', () => {
     expect(result).toEqual({ success: 0, fail: 1 });
   });

+  it('should wait for in-flight tasks to settle after SIGINT before returning', async () => {
+    // Given: Two running tasks that resolve after abort is triggered.
+    const tasks = [createTask('t1'), createTask('t2')];
+    const runner = createMockTaskRunner([]);
+    const deferred: Array<() => void> = [];
+    const startedSignals: AbortSignal[] = [];
+
+    mockExecuteAndCompleteTask.mockImplementation((_task, _runner, _cwd, _piece, _opts, parallelOpts) => {
+      const signal = parallelOpts?.abortSignal;
+      if (signal) startedSignals.push(signal);
+      return new Promise<boolean>((resolve) => {
+        if (signal) {
+          signal.addEventListener('abort', () => deferred.push(() => resolve(false)), { once: true });
+        } else {
+          deferred.push(() => resolve(true));
+        }
+      });
+    });
+
+    const resultPromise = runWithWorkerPool(
+      runner as never, tasks, 2, '/cwd', 'default', undefined, TEST_POLL_INTERVAL_MS,
+    );
+
+    await new Promise((resolve) => setTimeout(resolve, 10));
+
+    const sigintListeners = process.rawListeners('SIGINT') as ((...args: unknown[]) => void)[];
+    const handler = sigintListeners[sigintListeners.length - 1];
+    expect(handler).toBeDefined();
+    handler!();
+
+    await new Promise((resolve) => setTimeout(resolve, 10));
+    expect(startedSignals).toHaveLength(2);
+    for (const signal of startedSignals) {
+      expect(signal.aborted).toBe(true);
+    }
+
+    for (const resolveTask of deferred) {
+      resolveTask();
+    }
+
+    // Then: pool returns after in-flight tasks settle, counting them as failures.
+    const result = await resultPromise;
+    expect(result).toEqual({ success: 0, fail: 2 });
+  });
+
   describe('polling', () => {
     it('should pick up tasks added during execution via polling', async () => {
       // Given: 1 initial task running with concurrency=2, a second task appears via poll
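The SIGINT contract tested above — abort a shared signal, stop dequeuing new work, but await in-flight tasks before returning — can be sketched as follows. The function name and signature here are assumptions for illustration, not the real `runWithWorkerPool` API.

```typescript
// Illustrative worker pool with graceful SIGINT shutdown: a single
// AbortController is shared with every running task; on SIGINT the pool
// aborts it, workers stop pulling from the queue, and Promise.all only
// resolves once every in-flight task has settled.
async function runPool(
  tasks: string[],
  concurrency: number,
  run: (task: string, signal: AbortSignal) => Promise<boolean>,
): Promise<{ success: number; fail: number }> {
  const controller = new AbortController();
  const queue = [...tasks];
  let success = 0;
  let fail = 0;

  const onSigint = () => controller.abort();
  process.once('SIGINT', onSigint);

  const worker = async (): Promise<void> => {
    // Stop pulling new work once aborted; the task currently awaited in
    // this worker still settles and is counted in the totals.
    while (queue.length > 0 && !controller.signal.aborted) {
      const task = queue.shift()!;
      const ok = await run(task, controller.signal).catch(() => false);
      if (ok) {
        success += 1;
      } else {
        fail += 1;
      }
    }
  };

  await Promise.all(Array.from({ length: concurrency }, () => worker()));
  process.removeListener('SIGINT', onSigint);
  return { success, fail };
}
```

Counting aborted tasks as failures (they resolve `false` after the abort) matches the `{ success: 0, fail: 2 }` expectation in the test above.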
@@ -48,6 +48,7 @@ export const StatusSchema = z.enum([
   'pending',
   'done',
   'blocked',
+  'error',
   'approved',
   'rejected',
   'improve',

@@ -10,6 +10,7 @@ export type Status =
   | 'pending'
   | 'done'
   | 'blocked'
+  | 'error'
   | 'approved'
   | 'rejected'
   | 'improve'
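One way downstream code can tell the new `error` status apart from `blocked` is an exhaustive switch over the union. The `Status` union mirrors the diff above; the handler strings are illustrative assumptions, not actual product messages.

```typescript
// Status union as extended by the diff above ('error' added).
type Status = 'pending' | 'done' | 'blocked' | 'error' | 'approved' | 'rejected' | 'improve';

// Hypothetical handler: 'blocked' means the agent needs user input,
// while 'error' means a provider failure that a retry mechanism may handle.
function describeStatus(status: Status): string {
  switch (status) {
    case 'blocked':
      return 'agent needs more information from the user';
    case 'error':
      return 'provider error; eligible for automatic retry';
    default:
      return `status: ${status}`;
  }
}
```

Keeping the hand-written union and the zod enum in sync is the point of adding `'error'` in both hunks shown above.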