Release v0.12.0 (#241)

* takt: github-issue-193-takt-add-issue (#199)

* Added temporarily

* github-issue-200-arpeggio (#203)

* fix: auto-sync the next dist-tag on stable releases

* takt: github-issue-200-arpeggio

* github-issue-201-completetask-completed-tasks-yaml (#202)

* fix: auto-sync the next dist-tag on stable releases

* takt: github-issue-201-completetask-completed-tasks-yaml

* takt: github-issue-204-takt-tasks (#205)

* feat: add a frontend-specialized piece and introduce parallel arch-review

* chore: tidy the ja/en ordering and wording of piece categories

* takt: github-issue-207-previous-response-source-path (#210)

* fix: route callAiJudge through the provider system (Codex support)

callAiJudge was hardcoded under infra/claude/, so judge evaluation did not
work when the Codex provider was in use. Moved it to agents/ai-judge.ts so
that the provider is resolved correctly via runAgent.

* Release v0.11.1

* takt/#209/update review history logs (#213)

* fix: route callAiJudge through the provider system (Codex support)

callAiJudge was hardcoded under infra/claude/, so judge evaluation did not
work when the Codex provider was in use. Moved it to agents/ai-judge.ts so
that the provider is resolved correctly via runAgent.

* takt: github-issue-209

* takt: github-issue-198-e2e-config-yaml (#208)

* takt: github-issue-194-takt-add (#206)

* Handle runaway behavior in the slug agent

* Curb runaway behavior

* chore: add completion logs for branch and issue generation

* Make progress output clearer

* fix

* test: add withProgress mock in selectAndExecute autoPr test

* takt: github-issue-212-max-iteration-max-movement-ostinato (#217)

* takt: github-issue-180-ai (#219)

* takt: github-issue-163-report-phase-blocked (#218)

* Confirm whether to queue a task when creating an Issue

* takt: opencode (#222)

* takt: github-issue-192-e2e-test (#221)

* takt: issue (#220)

* Avoid port conflicts

* Support opencode

* Restore pass_previous_response

* takt: task-1770764964345 (#225)

* Fix prompts being echoed by opencode

* Fix opencode hanging

* Copy the task instructions into the worktree

* Suppress opencode questions

* Print the provider and model name

* fix: lint errors in merge/resolveTask/confirm

* fix: opencode permission and tool wiring for edit execution

* Fix incorrect opencode completion detection

* add e2e for opencode

* add test

* takt: github-issue-236-feat-claude-codex-opencode (#239)

* takt: slackweb (#234)

* takt: github-issue-238-fix-opencode (#240)

* Release v0.12.0

* provider event log default false
nrs 2026-02-11 17:13:36 +09:00 committed by GitHub
commit 86e80f33aa
GPG Key ID: B5690EEEBB952194
259 changed files with 11868 additions and 1152 deletions


@@ -4,6 +4,45 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+## [0.12.0] - 2026-02-11
+### Added
+- **OpenCode provider**: native support for OpenCode as a third provider — SDK integration via `@opencode-ai/sdk/v2`, permission mapping (readonly/edit/full → reject/once/always), SSE stream handling, retries (up to 3 attempts), and hang detection via a 10-minute timeout (#236, #238)
+- **Arpeggio movement**: new movement type for data-driven batch processing — batch splitting from CSV data sources, template expansion (`{line:N}`, `{col:N:name}`, `{batch_index}`), concurrent LLM calls (Semaphore-controlled), and concat/custom merge strategies (#200)
+- **`frontend` builtin piece**: new piece specialized for frontend development — React/Next.js knowledge injection, coding/testing policy application, and parallel architecture review
+- **Slack Webhook notifications**: automatic Slack notification when a piece finishes — configured via the `TAKT_NOTIFY_WEBHOOK` environment variable, 10-second timeout, failures never block other processing (#234)
+- **Session selection UI**: when starting interactive mode, a resumable session can be selected from past Claude Code sessions — lists the latest 10 sessions with initial-input and last-response previews (#180)
+- **Provider event logs**: write Claude/Codex/OpenCode runtime events to files in NDJSON format — recorded at `.takt/logs/{sessionId}-provider-events.jsonl`, with automatic compaction of long text (#236)
+- **Provider/model name display**: print the provider and model in use to the console when each movement runs
+### Changed
+- **`takt add` overhaul**: tasks are added automatically when an Issue is selected, interactive mode is removed, and task queueing is confirmed when creating an Issue (#193, #194)
+- **`max_iteration` → `max_movement` unification**: unify the iteration-limit terminology and add `ostinato` for unbounded execution (#212)
+- **Improved `previous_response` injection**: implement length control and always-on Source Path injection (#207)
+- **Task management improvements**: redefine `.takt/tasks/` as the home for long-form task specs, and make `completeTask()` remove completed records from `tasks.yaml` (#201, #204)
+- **Review output improvements**: refresh review output and move past reports into history logs (#209)
+- **Builtin piece simplification**: further tidy the top-level declarations of all builtin pieces
### Fixed
+- **Report Phase blocked handling**: retry in a new session when the Report Phase (Phase 2) is blocked (#163)
+- **OpenCode hang/completion fixes**: suppress prompt echo, suppress questions, fix hangs, and fix incorrect completion detection (#238)
+- **OpenCode permission/tool fixes**: fix permission and tool wiring for edit execution
+- **Task instructions copied to worktree**: ensure task instructions are copied correctly during worktree execution
+- Fix lint errors (merge/resolveTask/confirm)
+### Internal
+- Comprehensive tests for the OpenCode provider (client-cleanup, config, provider, stream-handler, types)
+- Comprehensive tests for Arpeggio (csv, data-source-factory, merge, schema, template, engine-arpeggio)
+- Major E2E test expansion: cli-catalog, cli-clear, cli-config, cli-export-cc, cli-help, cli-prompt, cli-reset-categories, cli-switch, error-handling, piece-error-handling, provider-error, quiet-mode, run-multiple-tasks, task-content-file (#192, #198)
+- New files: `providerEventLogger.ts`, `providerModel.ts`, `slackWebhook.ts`, `session-reader.ts`, `sessionSelector.ts`, `provider-resolution.ts`, `run-paths.ts`
+- New `ArpeggioRunner.ts` (data-driven batch processing engine)
+- Route AI Judge through the provider system (Codex/OpenCode support)
+- Tests added/expanded: report-phase-blocked, phase-runner-report-history, judgment-fallback, pieceExecution-session-loading, globalConfig-defaults, session-reader, sessionSelector, slackWebhook, providerEventLogger, provider-model, interactive, run-paths, engine-test-helpers
## [0.11.1] - 2026-02-10
### Fixed


@@ -218,7 +218,7 @@ Builtin resources are embedded in the npm package (`builtins/`). User files in `
```yaml
name: piece-name
description: Optional description
-max_iterations: 10
+max_movements: 10
initial_step: plan # First step to execute
steps:
@@ -291,7 +291,7 @@ Key points about parallel steps:
|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide iteration count |
-| `{max_iterations}` | Maximum iterations allowed |
+| `{max_movements}` | Maximum movements allowed |
| `{step_iteration}` | Per-step iteration count |
| `{previous_response}` | Previous step output (auto-injected if not in template) |
| `{user_inputs}` | Accumulated user inputs (auto-injected if not in template) |
@@ -406,7 +406,7 @@ Key constraints:
- **Ephemeral lifecycle**: Clone is created → task runs → auto-commit + push → clone is deleted. Branches are the single source of truth.
- **Session isolation**: Claude Code sessions are stored per-cwd in `~/.claude/projects/{encoded-path}/`. Sessions from the main project cannot be resumed in a clone. The engine skips session resume when `cwd !== projectCwd`.
- **No node_modules**: Clones only contain tracked files. `node_modules/` is absent.
-- **Dual cwd**: `cwd` = clone path (where agents run), `projectCwd` = project root. Reports write to `cwd/.takt/reports/` (clone) to prevent agents from discovering the main repository. Logs and session data write to `projectCwd`.
+- **Dual cwd**: `cwd` = clone path (where agents run), `projectCwd` = project root. Reports write to `cwd/.takt/runs/{slug}/reports/` (clone) to prevent agents from discovering the main repository. Logs and session data write to `projectCwd`.
- **List**: Use `takt list` to list branches. Instruct action creates a temporary clone for the branch, executes, pushes, then removes the clone.
## Error Propagation
@@ -455,10 +455,10 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d
- If persona file doesn't exist, the persona string is used as inline system prompt
**Report directory structure:**
-- Report dirs are created at `.takt/reports/{timestamp}-{slug}/`
+- Report dirs are created at `.takt/runs/{timestamp}-{slug}/reports/`
- Report files specified in `step.report` are written relative to report dir
- Report dir path is available as `{report_dir}` variable in instruction templates
-- When `cwd !== projectCwd` (worktree execution), reports write to `cwd/.takt/reports/` (clone dir) to prevent agents from discovering the main repository path
+- When `cwd !== projectCwd` (worktree execution), reports write to `cwd/.takt/runs/{slug}/reports/` (clone dir) to prevent agents from discovering the main repository path
**Session continuity across phases:**
- Agent sessions persist across Phase 1 → Phase 2 → Phase 3 for context continuity
@@ -470,7 +470,7 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d
- `git clone --shared` creates independent `.git` directory (not `git worktree`)
- Clone cwd ≠ project cwd: agents work in clone, reports write to clone, logs write to project
- Session resume is skipped when `cwd !== projectCwd` to avoid cross-directory contamination
-- Reports write to `cwd/.takt/reports/` (clone) to prevent agents from discovering the main repository path via instruction
+- Reports write to `cwd/.takt/runs/{slug}/reports/` (clone) to prevent agents from discovering the main repository path via instruction
- Clones are ephemeral: created → task runs → auto-commit + push → deleted
- Use `takt list` to manage task branches after clone deletion

README.md

@@ -4,7 +4,7 @@
**T**ask **A**gent **K**oordination **T**ool - Define how AI agents coordinate, where humans intervene, and what gets recorded — in YAML
-TAKT runs multiple AI agents (Claude Code, Codex) through YAML-defined workflows. Each step — who runs, what they see, what's allowed, what happens on failure — is declared in a piece file, not left to the agent.
+TAKT runs multiple AI agents (Claude Code, Codex, OpenCode) through YAML-defined workflows. Each step — who runs, what they see, what's allowed, what happens on failure — is declared in a piece file, not left to the agent.
TAKT is built with TAKT itself (dogfooding).
@@ -49,14 +49,14 @@ Personas, policies, and knowledge are managed as independent files and freely co
Choose one:
-- **Use provider CLIs**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [Codex](https://github.com/openai/codex) installed
+- **Use provider CLIs**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code), [Codex](https://github.com/openai/codex), or [OpenCode](https://opencode.ai) installed
-- **Use direct API**: **Anthropic API Key** or **OpenAI API Key** (no CLI required)
+- **Use direct API**: **Anthropic API Key**, **OpenAI API Key**, or **OpenCode API Key** (no CLI required)
Additionally required:
- [GitHub CLI](https://cli.github.com/) (`gh`) — Only needed for `takt #N` (GitHub Issue execution)
-**Pricing Note**: When using API Keys, TAKT directly calls the Claude API (Anthropic) or OpenAI API. The pricing structure is the same as using Claude Code or Codex. Be mindful of costs, especially when running automated tasks in CI/CD environments, as API usage can accumulate.
+**Pricing Note**: When using API Keys, TAKT directly calls the Claude API (Anthropic), OpenAI API, or OpenCode API. The pricing structure is the same as using the respective CLI tools. Be mindful of costs, especially when running automated tasks in CI/CD environments, as API usage can accumulate.
## Installation
@@ -186,7 +186,7 @@ takt #6 --auto-pr
### Task Management (add / run / watch / list)
-Batch processing using task files (`.takt/tasks/`). Useful for accumulating multiple tasks and executing them together later.
+Batch processing using `.takt/tasks.yaml` with task directories under `.takt/tasks/{slug}/`. Useful for accumulating multiple tasks and executing them together later.
#### Add Task (`takt add`)
@@ -201,14 +201,14 @@ takt add #28
#### Execute Tasks (`takt run`)
```bash
-# Execute all pending tasks in .takt/tasks/
+# Execute all pending tasks in .takt/tasks.yaml
takt run
```
#### Watch Tasks (`takt watch`)
```bash
-# Monitor .takt/tasks/ and auto-execute tasks (resident process)
+# Monitor .takt/tasks.yaml and auto-execute tasks (resident process)
takt watch
```
@@ -225,6 +225,13 @@ takt list --non-interactive --action delete --branch takt/my-branch --yes
takt list --non-interactive --format json
```
+#### Task Directory Workflow (Create / Run / Verify)
+1. Run `takt add` and confirm a pending record is created in `.takt/tasks.yaml`.
+2. Open the generated `.takt/tasks/{slug}/order.md` and add detailed specifications/references as needed.
+3. Run `takt run` (or `takt watch`) to execute pending tasks from `tasks.yaml`.
+4. Verify outputs in `.takt/runs/{slug}/reports/` using the same slug as `task_dir`.
### Pipeline Mode (for CI/Automation)
Specifying `--pipeline` enables non-interactive pipeline mode. Automatically creates branch → runs piece → commits & pushes. Suitable for CI/CD automation.
@@ -315,7 +322,7 @@ takt reset categories
| `--repo <owner/repo>` | Specify repository (for PR creation) |
| `--create-worktree <yes\|no>` | Skip worktree confirmation prompt |
| `-q, --quiet` | Minimal output mode: suppress AI output (for CI) |
-| `--provider <name>` | Override agent provider (claude\|codex\|mock) |
+| `--provider <name>` | Override agent provider (claude\|codex\|opencode\|mock) |
| `--model <name>` | Override agent model |
## Pieces
@@ -328,7 +335,7 @@ TAKT uses YAML-based piece definitions and rule-based routing. Builtin pieces ar
```yaml
name: default
-max_iterations: 10
+max_movements: 10
initial_movement: plan
# Section maps — key: file path (relative to this YAML)
@@ -466,6 +473,7 @@ TAKT includes multiple builtin pieces:
| `structural-reform` | Full project review and structural reform: iterative codebase restructuring with staged file splits. |
| `unit-test` | Unit test focused piece: test analysis → test implementation → review → fix. |
| `e2e-test` | E2E test focused piece: E2E analysis → E2E implementation → review → fix (Vitest-based E2E flow). |
+| `frontend` | Frontend-specialized development piece with React/Next.js focused reviews and knowledge injection. |
**Per-persona provider overrides:** Use `persona_providers` in config to route specific personas to different providers (e.g., coder on Codex, reviewers on Claude) without duplicating pieces.
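As a sketch of what such an override could look like in `~/.takt/config.yaml` — the persona names are the illustrative ones used in this README, and exact key placement may differ:

```yaml
# Hypothetical sketch: route specific personas to different providers.
persona_providers:
  coder: codex                     # run the coder persona on Codex
  ai-antipattern-reviewer: claude  # keep this reviewer on Claude
```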
@@ -532,14 +540,14 @@ The model string is passed to the Codex SDK. If unspecified, defaults to `codex`
.takt/ # Project-level configuration
├── config.yaml # Project config (current piece, etc.)
-├── tasks/ # Pending task files (.yaml, .md)
-├── completed/ # Completed tasks and reports
-├── reports/ # Execution reports (auto-generated)
-│   └── {timestamp}-{slug}/
-└── logs/ # NDJSON format session logs
-    ├── latest.json # Pointer to current/latest session
-    ├── previous.json # Pointer to previous session
-    └── {sessionId}.jsonl # NDJSON session log per piece execution
+├── tasks/ # Task input directories (.takt/tasks/{slug}/order.md, etc.)
+├── tasks.yaml # Pending tasks metadata (task_dir, piece, worktree, etc.)
+└── runs/ # Run-scoped artifacts
+    └── {slug}/
+        ├── reports/ # Execution reports (auto-generated)
+        ├── context/ # knowledge/policy/previous_response snapshots
+        ├── logs/ # NDJSON session logs for this run
+        └── meta.json # Run metadata
```
Builtin resources are embedded in the npm package (`builtins/`). User files in `~/.takt/` take priority.
@@ -553,11 +561,17 @@ Configure default provider and model in `~/.takt/config.yaml`:
language: en
default_piece: default
log_level: info
-provider: claude # Default provider: claude or codex
+provider: claude # Default provider: claude, codex, or opencode
model: sonnet # Default model (optional)
branch_name_strategy: romaji # Branch name generation: 'romaji' (fast) or 'ai' (slow)
prevent_sleep: false # Prevent macOS idle sleep during execution (caffeinate)
notification_sound: true # Enable/disable notification sounds
+notification_sound_events: # Optional per-event toggles
+  iteration_limit: false
+  piece_complete: true
+  piece_abort: true
+  run_complete: true # Enabled by default; set false to disable
+  run_abort: true # Enabled by default; set false to disable
concurrency: 1 # Parallel task count for takt run (1-10, default: 1 = sequential)
task_poll_interval_ms: 500 # Polling interval for new tasks during takt run (100-5000, default: 500)
interactive_preview_movements: 3 # Movement previews in interactive mode (0-10, default: 3)
@@ -569,9 +583,10 @@ interactive_preview_movements: 3 # Movement previews in interactive mode (0-10,
# ai-antipattern-reviewer: claude # Keep reviewers on Claude
# API Key configuration (optional)
-# Can be overridden by environment variables TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY
+# Can be overridden by environment variables TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY / TAKT_OPENCODE_API_KEY
anthropic_api_key: sk-ant-... # For Claude (Anthropic)
# openai_api_key: sk-... # For Codex (OpenAI)
+# opencode_api_key: ... # For OpenCode
# Builtin piece filtering (optional)
# builtin_pieces_enabled: true # Set false to disable all builtins
@@ -595,17 +610,17 @@ anthropic_api_key: sk-ant-... # For Claude (Anthropic)
1. **Set via environment variables**:
```bash
export TAKT_ANTHROPIC_API_KEY=sk-ant-... # For Claude
-# or
export TAKT_OPENAI_API_KEY=sk-... # For Codex
+export TAKT_OPENCODE_API_KEY=... # For OpenCode
```
2. **Set in config file**:
-Write `anthropic_api_key` or `openai_api_key` in `~/.takt/config.yaml` as shown above
+Write `anthropic_api_key`, `openai_api_key`, or `opencode_api_key` in `~/.takt/config.yaml` as shown above
Priority: Environment variables > `config.yaml` settings
**Notes:**
-- If you set an API Key, installing Claude Code or Codex is not necessary. TAKT directly calls the Anthropic API or OpenAI API.
+- If you set an API Key, installing Claude Code, Codex, or OpenCode is not necessary. TAKT directly calls the respective API.
- **Security**: If you write API Keys in `config.yaml`, be careful not to commit this file to Git. Consider using environment variables or adding `~/.takt/config.yaml` to `.gitignore`.
**Pipeline Template Variables:**
@@ -621,36 +636,43 @@ Priority: Environment variables > `config.yaml` settings
1. Piece movement `model` (highest priority)
2. Custom agent `model`
3. Global config `model`
-4. Provider default (Claude: sonnet, Codex: codex)
+4. Provider default (Claude: sonnet, Codex: codex, OpenCode: provider default)
## Detailed Guides
-### Task File Formats
+### Task Directory Format
-TAKT supports batch processing with task files in `.takt/tasks/`. Both `.yaml`/`.yml` and `.md` file formats are supported.
+TAKT stores task metadata in `.takt/tasks.yaml`, and each task's long specification in `.takt/tasks/{slug}/`.
-**YAML format** (recommended, supports worktree/branch/piece options):
+**Recommended layout**:
+```text
+.takt/
+  tasks/
+    20260201-015714-foptng/
+      order.md
+      schema.sql
+      wireframe.png
+  tasks.yaml
+  runs/
+    20260201-015714-foptng/
+      reports/
+```
+**tasks.yaml record**:
```yaml
-# .takt/tasks/add-auth.yaml
-task: "Add authentication feature"
-worktree: true # Execute in isolated shared clone
-branch: "feat/add-auth" # Branch name (auto-generated if omitted)
-piece: "default" # Piece specification (uses current if omitted)
+tasks:
+  - name: add-auth-feature
+    status: pending
+    task_dir: .takt/tasks/20260201-015714-foptng
+    piece: default
+    created_at: "2026-02-01T01:57:14.000Z"
+    started_at: null
+    completed_at: null
```
-**Markdown format** (simple, backward compatible):
+`takt add` creates `.takt/tasks/{slug}/order.md` automatically and saves `task_dir` to `tasks.yaml`.
-```markdown
-# .takt/tasks/add-login-feature.md
-Add login feature to the application.
-
-Requirements:
-- Username and password fields
-- Form validation
-- Error handling on failure
-```
#### Isolated Execution with Shared Clone
@@ -667,15 +689,14 @@ Clones are ephemeral. After task completion, they auto-commit + push, then delet
### Session Logs
-TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/logs/`. Each record is atomically appended, so partial logs are preserved even if the process crashes, and you can track in real-time with `tail -f`.
+TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/runs/{slug}/logs/`. Each record is atomically appended, so partial logs are preserved even if the process crashes, and you can track in real-time with `tail -f`.
-- `.takt/logs/latest.json` - Pointer to current (or latest) session
-- `.takt/logs/previous.json` - Pointer to previous session
-- `.takt/logs/{sessionId}.jsonl` - NDJSON session log per piece execution
+- `.takt/runs/{slug}/logs/{sessionId}.jsonl` - NDJSON session log per piece execution
+- `.takt/runs/{slug}/meta.json` - Run metadata (`task`, `piece`, `start/end`, `status`, etc.)
Record types: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`
-Agents can read `previous.json` to inherit context from the previous execution.
+Session continuation is automatic — just run `takt "task"` to continue from the previous session. The latest previous response is stored at `.takt/runs/{slug}/context/previous_responses/latest.md` and inherited automatically.
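To make the NDJSON session-log format concrete, here is a small sketch (not from the TAKT codebase; only the record `type` values are documented here, everything else is an assumption) that tallies record types in a `.jsonl` session log:

```typescript
// Sketch: count record types in a TAKT session log (.jsonl / NDJSON).
// Only the `type` values (piece_start, step_start, step_complete,
// piece_complete, piece_abort) come from this document; other fields
// are hypothetical.
type SessionRecord = { type: string } & Record<string, unknown>;

function summarizeSession(ndjson: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue; // tolerate a trailing newline
    const rec = JSON.parse(line) as SessionRecord;
    counts[rec.type] = (counts[rec.type] ?? 0) + 1;
  }
  return counts;
}
```

Because records are appended atomically, a summary like this stays consistent even for logs from crashed runs; a partially written final line would additionally need a try/catch around `JSON.parse`.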
### Adding Custom Pieces
@@ -690,7 +711,7 @@ takt eject default
# ~/.takt/pieces/my-piece.yaml
name: my-piece
description: Custom piece
-max_iterations: 5
+max_movements: 5
initial_movement: analyze
personas:
@@ -740,11 +761,11 @@ Variables available in `instruction_template`:
|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide turn count (total steps executed) |
-| `{max_iterations}` | Maximum iteration count |
+| `{max_movements}` | Maximum iteration count |
| `{movement_iteration}` | Per-movement iteration count (times this movement has been executed) |
| `{previous_response}` | Output from previous movement (auto-injected if not in template) |
| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
-| `{report_dir}` | Report directory path (e.g., `.takt/reports/20250126-143052-task-summary`) |
+| `{report_dir}` | Report directory path (e.g., `.takt/runs/20250126-143052-task-summary/reports`) |
| `{report:filename}` | Expands to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |
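Tying these variables together, a hypothetical `instruction_template` might read as follows — the movement name and report filename are illustrative, not from a shipped piece:

```yaml
# Hypothetical movement sketch using the template variables above.
movements:
  - name: plan
    instruction_template: |
      Task: {task}
      Turn {iteration} of {max_movements} (this movement has run {movement_iteration} times)
      Previous output: {previous_response}
      Write your plan to {report:00-plan.md} under {report_dir}.
```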
### Piece Design
@@ -777,7 +798,7 @@ Special `next` values: `COMPLETE` (success), `ABORT` (failure)
| `edit` | - | Whether movement can edit project files (`true`/`false`) |
| `pass_previous_response` | `true` | Pass previous movement output to `{previous_response}` |
| `allowed_tools` | - | List of tools agent can use (Read, Glob, Grep, Edit, Write, Bash, etc.) |
-| `provider` | - | Override provider for this movement (`claude` or `codex`) |
+| `provider` | - | Override provider for this movement (`claude`, `codex`, or `opencode`) |
| `model` | - | Override model for this movement |
| `permission_mode` | - | Permission mode: `readonly`, `edit`, `full` (provider-independent) |
| `output_contracts` | - | Output contract definitions for report files |
@@ -855,7 +876,7 @@ npm install -g takt
takt --pipeline --task "Fix bug" --auto-pr --repo owner/repo
```
-For authentication, set `TAKT_ANTHROPIC_API_KEY` or `TAKT_OPENAI_API_KEY` environment variables (TAKT-specific prefix).
+For authentication, set `TAKT_ANTHROPIC_API_KEY`, `TAKT_OPENAI_API_KEY`, or `TAKT_OPENCODE_API_KEY` environment variables (TAKT-specific prefix).
```bash
# For Claude (Anthropic)
@@ -863,6 +884,9 @@ export TAKT_ANTHROPIC_API_KEY=sk-ant-...
# For Codex (OpenAI)
export TAKT_OPENAI_API_KEY=sk-...
+# For OpenCode
+export TAKT_OPENCODE_API_KEY=...
```
## Documentation
## Documentation ## Documentation


@@ -6,6 +6,18 @@ piece_categories:
      - coding
      - minimal
      - compound-eye
+  🎨 Frontend:
+    pieces:
+      - frontend
+  ⚙️ Backend: {}
+  🔧 Expert:
+    Full Stack:
+      pieces:
+        - expert
+        - expert-cqrs
+  🛠️ Refactoring:
+    pieces:
+      - structural-reform
  🔍 Review:
    pieces:
      - review-fix-minimal
@@ -14,16 +26,6 @@ piece_categories:
    pieces:
      - unit-test
      - e2e-test
-  🎨 Frontend: {}
-  ⚙️ Backend: {}
-  🔧 Expert:
-    Full Stack:
-      pieces:
-        - expert
-        - expert-cqrs
-    Refactoring:
-      pieces:
-        - structural-reform
  Others:
    pieces:
      - research


@@ -1,6 +1,6 @@
name: coding
description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
-max_iterations: 20
+max_movements: 20
initial_movement: plan
movements:
- name: plan


@@ -1,6 +1,6 @@
name: compound-eye
description: Multi-model review - send the same instruction to Claude and Codex simultaneously, synthesize both responses
-max_iterations: 10
+max_movements: 10
initial_movement: evaluate
movements:
- name: evaluate


@@ -1,6 +1,6 @@
name: default
description: Standard development piece with planning and specialized reviews
-max_iterations: 30
+max_movements: 30
initial_movement: plan
loop_monitors:
- cycle:


@@ -1,6 +1,6 @@
name: e2e-test
description: E2E test focused piece (E2E analysis → E2E implementation → review → fix)
-max_iterations: 20
+max_movements: 20
initial_movement: plan_test
loop_monitors:
- cycle:

View File

@@ -1,6 +1,6 @@
 name: expert-cqrs
 description: CQRS+ES, Frontend, Security, QA Expert Review
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -87,7 +86,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -218,7 +216,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -267,7 +264,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing

View File

@@ -1,6 +1,6 @@
 name: expert
 description: Architecture, Frontend, Security, QA Expert Review
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -86,7 +85,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -216,7 +214,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -264,7 +261,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing

View File

@@ -0,0 +1,282 @@
name: frontend
description: Frontend, Security, QA Expert Review
max_movements: 30
initial_movement: plan
movements:
- name: plan
  edit: false
  persona: planner
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Bash
  - WebSearch
  - WebFetch
  instruction: plan
  rules:
  - condition: Task analysis and planning is complete
    next: implement
  - condition: Requirements are unclear and planning cannot proceed
    next: ABORT
  output_contracts:
    report:
    - name: 00-plan.md
      format: plan
- name: implement
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  session: refresh
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: implement
  rules:
  - condition: Implementation is complete
    next: ai_review
  - condition: No implementation (report only)
    next: ai_review
  - condition: Cannot proceed with implementation
    next: ai_review
  - condition: User input required
    next: implement
    requires_user_input: true
    interactive_only: true
  output_contracts:
    report:
    - Scope: 01-coder-scope.md
    - Decisions: 02-coder-decisions.md
- name: ai_review
  edit: false
  persona: ai-antipattern-reviewer
  policy:
  - review
  - ai-antipattern
  allowed_tools:
  - Read
  - Glob
  - Grep
  - WebSearch
  - WebFetch
  instruction: ai-review
  rules:
  - condition: No AI-specific issues found
    next: reviewers
  - condition: AI-specific issues detected
    next: ai_fix
  output_contracts:
    report:
    - name: 03-ai-review.md
      format: ai-review
- name: ai_fix
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  session: refresh
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: ai-fix
  rules:
  - condition: AI Reviewer's issues have been fixed
    next: ai_review
  - condition: No fix needed (verified target files/spec)
    next: ai_no_fix
  - condition: Unable to proceed with fixes
    next: ai_no_fix
- name: ai_no_fix
  edit: false
  persona: architecture-reviewer
  policy: review
  allowed_tools:
  - Read
  - Glob
  - Grep
  rules:
  - condition: ai_review's findings are valid (fix required)
    next: ai_fix
  - condition: ai_fix's judgment is valid (no fix needed)
    next: reviewers
  instruction: arbitrate
- name: reviewers
  parallel:
  - name: arch-review
    edit: false
    persona: architecture-reviewer
    policy: review
    knowledge:
    - architecture
    - frontend
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-arch
    output_contracts:
      report:
      - name: 04-architect-review.md
        format: architecture-review
  - name: frontend-review
    edit: false
    persona: frontend-reviewer
    policy: review
    knowledge: frontend
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-frontend
    output_contracts:
      report:
      - name: 05-frontend-review.md
        format: frontend-review
  - name: security-review
    edit: false
    persona: security-reviewer
    policy: review
    knowledge: security
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-security
    output_contracts:
      report:
      - name: 06-security-review.md
        format: security-review
  - name: qa-review
    edit: false
    persona: qa-reviewer
    policy:
    - review
    - qa
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-qa
    output_contracts:
      report:
      - name: 07-qa-review.md
        format: qa-review
  rules:
  - condition: all("approved")
    next: supervise
  - condition: any("needs_fix")
    next: fix
- name: fix
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  permission_mode: edit
  rules:
  - condition: Fix complete
    next: reviewers
  - condition: Cannot proceed, insufficient info
    next: plan
  instruction: fix
- name: supervise
  edit: false
  persona: expert-supervisor
  policy: review
  allowed_tools:
  - Read
  - Glob
  - Grep
  - WebSearch
  - WebFetch
  instruction: supervise
  rules:
  - condition: All validations pass and ready to merge
    next: COMPLETE
  - condition: Issues detected during final review
    next: fix_supervisor
  output_contracts:
    report:
    - Validation: 08-supervisor-validation.md
    - Summary: summary.md
- name: fix_supervisor
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: fix-supervisor
  rules:
  - condition: Supervisor's issues have been fixed
    next: supervise
  - condition: Unable to proceed with fixes
    next: plan
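To route a queued task through this new piece, the `piece` field of a task entry in `.takt/tasks.yaml` (described in this release's task-format changes) would presumably be set to `frontend`. A hypothetical entry (name, slug, and task are invented):

```yaml
# Sketch of a tasks.yaml entry selecting the new frontend piece.
tasks:
  - name: polish-login-form
    status: pending
    task_dir: .takt/tasks/20260301-000000-example
    piece: frontend   # plan -> implement -> ai_review -> reviewers -> supervise
```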

View File

@@ -1,6 +1,6 @@
 name: magi
 description: MAGI Deliberation System - Analyze from 3 perspectives and decide by majority
-max_iterations: 5
+max_movements: 5
 initial_movement: melchior
 movements:
 - name: melchior

View File

@@ -1,6 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: implement
 movements:
 - name: implement

View File

@@ -1,6 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
-max_iterations: 10
+max_movements: 10
 initial_movement: execute
 movements:
 - name: execute

View File

@@ -1,6 +1,6 @@
 name: research
 description: Research piece - autonomously executes research without asking questions
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan
@@ -13,7 +13,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: plan
@@ -48,7 +48,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: dig
@@ -88,7 +88,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: supervise (research quality evaluation)

View File

@@ -1,6 +1,6 @@
 name: review-fix-minimal
 description: Review and fix piece for existing code (starts with review, no implementation)
-max_iterations: 20
+max_movements: 20
 initial_movement: reviewers
 movements:
 - name: implement

View File

@@ -1,6 +1,6 @@
 name: review-only
 description: Review-only piece - reviews code without making edits
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan

View File

@@ -1,6 +1,6 @@
 name: structural-reform
 description: Full project review and structural reform - iterative codebase restructuring with staged file splits
-max_iterations: 50
+max_movements: 50
 initial_movement: review
 loop_monitors:
 - cycle:
@@ -44,7 +44,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: review (full project review)
@@ -126,7 +126,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: plan_reform (reform plan creation)
@@ -323,7 +323,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: verify (build and test verification)
@@ -378,7 +378,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: next_target (progress check and next target selection)

View File

@@ -1,6 +1,6 @@
 name: unit-test
 description: Unit test focused piece (test analysis → test implementation → review → fix)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:

View File

@@ -82,9 +82,9 @@ InstructionBuilder が instruction_template 内の `{変数名}` を展開する
 | 変数 | 内容 |
 |------|------|
 | `{iteration}` | ピース全体のイテレーション数 |
-| `{max_iterations}` | 最大イテレーション数 |
+| `{max_movements}` | 最大イテレーション数 |
 | `{movement_iteration}` | ムーブメント単位のイテレーション数 |
-| `{report_dir}` | レポートディレクトリ名 |
+| `{report_dir}` | レポートディレクトリ名 `.takt/runs/{slug}/reports` |
 | `{report:filename}` | 指定レポートの内容展開(ファイルが存在する場合) |
 | `{cycle_count}` | ループモニターで検出されたサイクル回数(`loop_monitors` 専用) |
@@ -222,7 +222,7 @@ InstructionBuilder が instruction_template 内の `{変数名}` を展開する
 # 非許容
 **参照するレポート:**
-- .takt/reports/20250101-task/ai-review.md ← パスのハードコード
+- .takt/runs/20250101-task/reports/ai-review.md ← パスのハードコード
 ```
 ---
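As a sketch of how the renamed template variables compose inside a movement, an `instruction_template` might reference them like this (the movement name, report filename, and wording are invented for illustration; the `{...}` variables are the ones from the table above):

```yaml
# Hypothetical movement showing the template variables in use.
- name: verify
  instruction_template: |
    ## Piece Status
    - Iteration: {iteration}/{max_movements} (piece-wide)
    - Movement Iteration: {movement_iteration}

    Write your findings to {report_dir}/09-verify.md
    Previous AI review: {report:03-ai-review.md}
```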

View File

@@ -157,7 +157,7 @@
 1. **ポリシーの詳細ルール**: コード例・判定基準・例外リスト等の詳細はポリシーの責務(1行の行動指針は行動姿勢に記載してよい)
 2. **ピース固有の概念**: ムーブメント名、レポートファイル名、ステップ間ルーティング
-3. **ツール固有の環境情報**: `.takt/reports/` 等のディレクトリパス、テンプレート変数(`{report_dir}` 等)
+3. **ツール固有の環境情報**: `.takt/runs/` 等のディレクトリパス、テンプレート変数(`{report_dir}` 等)
 4. **実行手順**: 「まず〜を読み、次に〜を実行」のような手順はinstruction_templateの責務
 ### 例外: ドメイン知識としての重複

View File

@@ -100,7 +100,7 @@
 1. **特定エージェント固有の知識**: Architecture Reviewer だけが使う検出手法等
 2. **ピース固有の概念**: ムーブメント名、レポートファイル名
-3. **ツール固有のパス**: `.takt/reports/` 等の具体的なディレクトリパス
+3. **ツール固有のパス**: `.takt/runs/` 等の具体的なディレクトリパス
 4. **実行手順**: どのファイルを読め、何を実行しろ等
 ---

View File

@@ -6,6 +6,18 @@ piece_categories:
     - coding
     - minimal
     - compound-eye
+  🎨 フロントエンド:
+    pieces:
+    - frontend
+  ⚙️ バックエンド: {}
+  🔧 エキスパート:
+    フルスタック:
+      pieces:
+      - expert
+      - expert-cqrs
+  🛠️ リファクタリング:
+    pieces:
+    - structural-reform
   🔍 レビュー:
     pieces:
     - review-fix-minimal
@@ -14,16 +26,6 @@ piece_categories:
     pieces:
     - unit-test
     - e2e-test
-  🎨 フロントエンド: {}
-  ⚙️ バックエンド: {}
-  🔧 エキスパート:
-    フルスタック:
-      pieces:
-      - expert
-      - expert-cqrs
-    リファクタリング:
-      pieces:
-      - structural-reform
   その他:
     pieces:
     - research

View File

@@ -1,6 +1,6 @@
 name: coding
 description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan
 movements:
 - name: plan

View File

@@ -1,6 +1,6 @@
 name: compound-eye
 description: 複眼レビュー - 同じ指示を Claude と Codex に同時に投げ、両者の回答を統合する
-max_iterations: 10
+max_movements: 10
 initial_movement: evaluate
 movements:

View File

@@ -1,6 +1,6 @@
 name: default
 description: Standard development piece with planning and specialized reviews
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 loop_monitors:
 - cycle:

View File

@@ -1,6 +1,6 @@
 name: e2e-test
 description: E2Eテスト追加に特化したピース(E2E分析→E2E実装→レビュー→修正)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:

View File

@@ -1,6 +1,6 @@
 name: expert-cqrs
 description: CQRS+ES・フロントエンド・セキュリティ・QA専門家レビュー
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -87,7 +86,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -218,7 +216,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -267,7 +264,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing

View File

@@ -1,6 +1,6 @@
 name: expert
 description: アーキテクチャ・フロントエンド・セキュリティ・QA専門家レビュー
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -86,7 +85,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -216,7 +214,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -264,7 +261,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing

View File

@@ -0,0 +1,282 @@
name: frontend
description: フロントエンド・セキュリティ・QA専門家レビュー
max_movements: 30
initial_movement: plan
movements:
- name: plan
  edit: false
  persona: planner
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Bash
  - WebSearch
  - WebFetch
  instruction: plan
  rules:
  - condition: タスク分析と計画が完了した
    next: implement
  - condition: 要件が不明確で計画を立てられない
    next: ABORT
  output_contracts:
    report:
    - name: 00-plan.md
      format: plan
- name: implement
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  session: refresh
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: implement
  rules:
  - condition: 実装が完了した
    next: ai_review
  - condition: 実装未着手(レポートのみ)
    next: ai_review
  - condition: 実装を進行できない
    next: ai_review
  - condition: ユーザー入力が必要
    next: implement
    requires_user_input: true
    interactive_only: true
  output_contracts:
    report:
    - Scope: 01-coder-scope.md
    - Decisions: 02-coder-decisions.md
- name: ai_review
  edit: false
  persona: ai-antipattern-reviewer
  policy:
  - review
  - ai-antipattern
  allowed_tools:
  - Read
  - Glob
  - Grep
  - WebSearch
  - WebFetch
  instruction: ai-review
  rules:
  - condition: AI特有の問題が見つからない
    next: reviewers
  - condition: AI特有の問題が検出された
    next: ai_fix
  output_contracts:
    report:
    - name: 03-ai-review.md
      format: ai-review
- name: ai_fix
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  session: refresh
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: ai-fix
  rules:
  - condition: AI Reviewerの指摘に対する修正が完了した
    next: ai_review
  - condition: 修正不要(指摘対象ファイル/仕様の確認済み)
    next: ai_no_fix
  - condition: 修正を進行できない
    next: ai_no_fix
- name: ai_no_fix
  edit: false
  persona: architecture-reviewer
  policy: review
  allowed_tools:
  - Read
  - Glob
  - Grep
  rules:
  - condition: ai_reviewの指摘が妥当(修正すべき)
    next: ai_fix
  - condition: ai_fixの判断が妥当(修正不要)
    next: reviewers
  instruction: arbitrate
- name: reviewers
  parallel:
  - name: arch-review
    edit: false
    persona: architecture-reviewer
    policy: review
    knowledge:
    - architecture
    - frontend
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-arch
    output_contracts:
      report:
      - name: 04-architect-review.md
        format: architecture-review
  - name: frontend-review
    edit: false
    persona: frontend-reviewer
    policy: review
    knowledge: frontend
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-frontend
    output_contracts:
      report:
      - name: 05-frontend-review.md
        format: frontend-review
  - name: security-review
    edit: false
    persona: security-reviewer
    policy: review
    knowledge: security
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-security
    output_contracts:
      report:
      - name: 06-security-review.md
        format: security-review
  - name: qa-review
    edit: false
    persona: qa-reviewer
    policy:
    - review
    - qa
    allowed_tools:
    - Read
    - Glob
    - Grep
    - WebSearch
    - WebFetch
    rules:
    - condition: approved
    - condition: needs_fix
    instruction: review-qa
    output_contracts:
      report:
      - name: 07-qa-review.md
        format: qa-review
  rules:
  - condition: all("approved")
    next: supervise
  - condition: any("needs_fix")
    next: fix
- name: fix
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  permission_mode: edit
  rules:
  - condition: 修正が完了した
    next: reviewers
  - condition: 修正を進行できない
    next: plan
  instruction: fix
- name: supervise
  edit: false
  persona: expert-supervisor
  policy: review
  allowed_tools:
  - Read
  - Glob
  - Grep
  - WebSearch
  - WebFetch
  instruction: supervise
  rules:
  - condition: すべての検証が完了し、マージ可能な状態である
    next: COMPLETE
  - condition: 問題が検出された
    next: fix_supervisor
  output_contracts:
    report:
    - Validation: 08-supervisor-validation.md
    - Summary: summary.md
- name: fix_supervisor
  edit: true
  persona: coder
  policy:
  - coding
  - testing
  knowledge:
  - frontend
  - security
  - architecture
  allowed_tools:
  - Read
  - Glob
  - Grep
  - Edit
  - Write
  - Bash
  - WebSearch
  - WebFetch
  instruction: fix-supervisor
  rules:
  - condition: 監督者の指摘に対する修正が完了した
    next: supervise
  - condition: 修正を進行できない
    next: plan

View File

@@ -1,6 +1,6 @@
 name: magi
 description: MAGI合議システム - 3つの観点から分析し多数決で判定
-max_iterations: 5
+max_movements: 5
 initial_movement: melchior
 movements:
 - name: melchior

View File

@@ -1,6 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: implement
 movements:
 - name: implement

View File

@@ -1,6 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
-max_iterations: 10
+max_movements: 10
 initial_movement: execute
 movements:
 - name: execute

View File

@@ -1,6 +1,6 @@
 name: research
 description: 調査ピース - 質問せずに自律的に調査を実行
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan
@@ -13,7 +13,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: plan
@@ -48,7 +48,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: dig
@@ -88,7 +88,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: supervise (調査品質評価)

View File

@@ -1,6 +1,6 @@
 name: review-fix-minimal
 description: 既存コードのレビューと修正ピース(レビュー開始、実装なし)
-max_iterations: 20
+max_movements: 20
 initial_movement: reviewers
 movements:
 - name: implement

View File

@@ -1,6 +1,6 @@
 name: review-only
 description: レビュー専用ピース - コードをレビューするだけで編集は行わない
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan

View File

@@ -1,6 +1,6 @@
 name: structural-reform
 description: プロジェクト全体レビューと構造改革 - 段階的なファイル分割による反復的コードベース再構築
-max_iterations: 50
+max_movements: 50
 initial_movement: review
 loop_monitors:
 - cycle:
@@ -44,7 +44,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: review(プロジェクト全体レビュー)
@@ -126,7 +126,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: plan_reform(改革計画策定)
@@ -323,7 +323,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: verify(ビルド・テスト検証)
@@ -378,7 +378,7 @@ movements:
   - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: next_target(進捗確認と次ターゲット選択)

View File

@@ -1,6 +1,6 @@
 name: unit-test
 description: 単体テスト追加に特化したピース(テスト分析→テスト実装→レビュー→修正)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:

View File

@@ -1,6 +1,6 @@
 # Temporary files
 logs/
-reports/
+runs/
 completed/
 tasks/
 worktrees/

View File

@@ -1,37 +1,48 @@
-TAKT Task File Format
-=====================
-Tasks placed in this directory (.takt/tasks/) will be processed by TAKT.
+TAKT Task Directory Format
+==========================
+`.takt/tasks/` is the task input directory. Each task uses one subdirectory.
-## YAML Format (Recommended)
-# .takt/tasks/my-task.yaml
-task: "Task description"
-worktree: true              # (optional) true | "/path/to/dir"
-branch: "feat/my-feature"   # (optional) branch name
-piece: "default"            # (optional) piece name
+## Directory Layout (Recommended)
+.takt/
+  tasks/
+    20260201-015714-foptng/
+      order.md
+      schema.sql
+      wireframe.png
+- Directory name should match the report directory slug.
+- `order.md` is required.
+- Other files are optional reference materials.
+## tasks.yaml Format
+Store task metadata in `.takt/tasks.yaml`, and point to the task directory with `task_dir`.
+tasks:
+  - name: add-auth-feature
+    status: pending
+    task_dir: .takt/tasks/20260201-015714-foptng
+    piece: default
+    created_at: "2026-02-01T01:57:14.000Z"
+    started_at: null
+    completed_at: null
 Fields:
-  task     (required)  Task description (string)
-  worktree (optional)  true: create shared clone, "/path": clone at path
-  branch   (optional)  Branch name (auto-generated if omitted: takt/{timestamp}-{slug})
-  piece    (optional)  Piece name (uses current piece if omitted)
+  task_dir     (recommended)  Path to task directory that contains `order.md`
+  content      (legacy)       Inline task text (kept for compatibility)
+  content_file (legacy)       Path to task text file (kept for compatibility)
-## Markdown Format (Simple)
-# .takt/tasks/my-task.md
-Entire file content becomes the task description.
-Supports multiline. No structured options available.
-## Supported Extensions
-  .yaml, .yml  -> YAML format (parsed and validated)
-  .md          -> Markdown format (plain text, backward compatible)
+## Command Behavior
+- `takt add` creates `.takt/tasks/{slug}/order.md` automatically.
+- `takt run` and `takt watch` read `.takt/tasks.yaml` and resolve `task_dir`.
+- Report output is written to `.takt/runs/{slug}/reports/`.
 ## Commands
-  takt /add-task     Add a task interactively
-  takt /run-tasks    Run all pending tasks
-  takt /watch        Watch and auto-run tasks
-  takt /list-tasks   List task branches (merge/delete)
+  takt add     Add a task and create task directory
+  takt run     Run all pending tasks in tasks.yaml
+  takt watch   Watch tasks.yaml and run pending tasks
+  takt list    List task branches (merge/delete)
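The on-disk layout that `takt add` is described as producing can be reproduced by hand, which is a quick way to see the new format. A sketch, assuming the structure above (the slug, task name, and task text are invented):

```shell
# Recreate the task directory layout described above (normally done by `takt add`).
slug="20260301-000000-example"
mkdir -p ".takt/tasks/${slug}"
printf 'Add input validation to the signup form.\n' > ".takt/tasks/${slug}/order.md"

# Minimal tasks.yaml entry pointing at the task directory via task_dir.
cat > .takt/tasks.yaml <<EOF
tasks:
  - name: signup-validation
    status: pending
    task_dir: .takt/tasks/${slug}
    piece: default
EOF
```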

View File

@ -83,7 +83,7 @@ $ARGUMENTS を以下のように解析する:
3. 見つからない場合: 上記2ディレクトリを Glob で列挙し、AskUserQuestion で選択させる 3. 見つからない場合: 上記2ディレクトリを Glob で列挙し、AskUserQuestion で選択させる
YAMLから以下を抽出する→ references/yaml-schema.md 参照): YAMLから以下を抽出する→ references/yaml-schema.md 参照):
- `name`, `max_iterations`, `initial_movement`, `movements` 配列 - `name`, `max_movements`, `initial_movement`, `movements` 配列
- セクションマップ: `personas`, `policies`, `instructions`, `output_contracts`, `knowledge` - セクションマップ: `personas`, `policies`, `instructions`, `output_contracts`, `knowledge`
### 手順 2: セクションリソースの事前読み込み ### 手順 2: セクションリソースの事前読み込み
@ -116,13 +116,21 @@ TeamCreate tool を呼ぶ:
- `permission_mode = コマンドで解析された権限モード("bypassPermissions" / "acceptEdits" / "default"` - `permission_mode = コマンドで解析された権限モード("bypassPermissions" / "acceptEdits" / "default"`
- `movement_history = []`遷移履歴。Loop Monitor 用) - `movement_history = []`遷移履歴。Loop Monitor 用)
**レポートディレクトリ**: いずれかの movement に `report` フィールドがある場合、`.takt/reports/{YYYYMMDD-HHmmss}-{slug}/` を作成し、パスを `report_dir` 変数に保持する。 **実行ディレクトリ**: いずれかの movement に `report` フィールドがある場合、`.takt/runs/{YYYYMMDD-HHmmss}-{slug}/` を作成し、以下を配置する。
- `reports/`(レポート出力)
- `context/knowledge/`Knowledge スナップショット)
- `context/policy/`Policy スナップショット)
- `context/previous_responses/`Previous Response 履歴 + `latest.md`
- `logs/`(実行ログ)
- `meta.json`run メタデータ)
レポート出力先パスを `report_dir` 変数(`.takt/runs/{slug}/reports`)として保持する。
次に **手順 5** に進む。 次に **手順 5** に進む。
### 手順 5: チームメイト起動 ### 手順 5: チームメイト起動
**iteration が max_iterations を超えていたら → 手順 8ABORT: イテレーション上限)に進む。** **iteration が max_movements を超えていたら → 手順 8ABORT: イテレーション上限)に進む。**
current_movement のプロンプトを構築する(→ references/engine.md のプロンプト構築を参照)。 current_movement のプロンプトを構築する(→ references/engine.md のプロンプト構築を参照)。


@@ -133,7 +133,7 @@ Load the instruction template file from the movement's `instruction:` key

```
- Working directory: {cwd}
- Piece: {piece_name}
- Movement: {movement_name}
- Iteration: {iteration} / {max_movements}
- Movement iteration: {movement_iteration}
```

@@ -146,9 +146,9 @@ Load the instruction template file from the movement's `instruction:` key

| `{task}` | The task content entered by the user |
| `{previous_response}` | The previous movement's teammate output |
| `{iteration}` | Piece-wide iteration count (1-based) |
| `{max_movements}` | The piece's max_movements value |
| `{movement_iteration}` | Number of times this movement has run (1-based) |
| `{report_dir}` | Report directory path (`.takt/runs/{slug}/reports`) |
| `{report:filename}` | Content of the specified report file (fetched via Read) |

### Handling {report:filename}

@@ -212,7 +212,10 @@ report:

Extract the report content from the teammate's output and save it to the report directory with the Write tool.
**This is done by the Team Lead (you).** Do it after receiving the teammate's output.

**Run directory**: create it at `.takt/runs/{timestamp}-{slug}/`.
- Save reports under `.takt/runs/{timestamp}-{slug}/reports/`.
- Save `Knowledge` / `Policy` / `Previous Response` under `.takt/runs/{timestamp}-{slug}/context/`.
- The latest previous response is `.takt/runs/{timestamp}-{slug}/context/previous_responses/latest.md`.
- `{timestamp}`: `YYYYMMDD-HHmmss` format
- `{slug}`: the first 30 characters of the task content, slugified
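The timestamp-plus-slug directory naming described above can be sketched in TypeScript. This is an illustrative reconstruction of the naming rule only; the function name and formatting details are assumptions, not the actual engine code:

```typescript
// Illustrative sketch of the .takt/runs/{YYYYMMDD-HHmmss}-{slug}/ naming rule.
function runDirName(now: Date, slug: string): string {
  const p = (n: number) => String(n).padStart(2, "0"); // zero-pad to 2 digits
  const stamp =
    `${now.getFullYear()}${p(now.getMonth() + 1)}${p(now.getDate())}` +
    `-${p(now.getHours())}${p(now.getMinutes())}${p(now.getSeconds())}`;
  return `.takt/runs/${stamp}-${slug}`;
}

const dir = runDirName(new Date(2025, 0, 26, 14, 30, 52), "task-summary");
// ".takt/runs/20250126-143052-task-summary"
```

The `{report_dir}` variable would then be `dir + "/reports"` under this scheme.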
@@ -314,7 +317,7 @@ Inject the same tag-output instructions into parallel sub-steps as well

### Basic rules

- Warn when the same movement runs three or more times in a row
- Force-terminate (ABORT) when `max_movements` is reached

### Counter management

@@ -358,17 +361,24 @@ loop_monitors:

d. Evaluate the judge's output against the judge's `rules`
e. Transition to the matched rule's `next` (this overrides normal rule evaluation)

## Run artifact management

### Creating the run directory

Create the run directory when piece execution starts:

```
.takt/runs/{YYYYMMDD-HHmmss}-{slug}/
  reports/
  context/
    knowledge/
    policy/
    previous_responses/
  logs/
  meta.json
```

Of these, expose the `reports/` path to all movements as the `{report_dir}` variable.

### Saving reports

@@ -392,7 +402,7 @@ loop_monitors:

Create the team with TeamCreate
Create the run directory
Fetch initial_movement


@@ -7,7 +7,7 @@

```yaml
name: piece-name          # Piece name (required)
description: Description text  # Piece description (optional)
max_movements: 10         # Maximum number of iterations (required)
initial_movement: plan    # Name of the movement to run first (required)

# Section map (key → file path table)
```

@@ -192,7 +192,7 @@ quality_gates:

| `{task}` | The user's task input (auto-added if absent from the template) |
| `{previous_response}` | The previous movement's output (auto-added when pass_previous_response: true) |
| `{iteration}` | Piece-wide iteration count |
| `{max_movements}` | Maximum number of iterations |
| `{movement_iteration}` | Number of times this movement has run |
| `{report_dir}` | Report directory name |
| `{report:filename}` | Expands to the content of the specified report file |
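The placeholder variables above are substituted into instruction templates. A minimal TypeScript sketch of that expansion, assuming simple string substitution (the real engine also resolves `{report:filename}` by reading files, which is out of scope here):

```typescript
// Minimal placeholder expansion sketch; unknown keys are left untouched.
function expand(template: string, vars: Record<string, string>): string {
  return template.replace(/\{([a-z_]+)\}/g, (match, key) => vars[key] ?? match);
}

const out = expand("Iteration {iteration} / {max_movements}: {task}", {
  iteration: "2",
  max_movements: "10",
  task: "Fix the bug",
});
// "Iteration 2 / 10: Fix the bug"
```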


@@ -2,7 +2,7 @@

**T**ask **A**gent **K**oordination **T**ool - define AI agent coordination procedures, human intervention points, and records in YAML

TAKT runs multiple AI agents (Claude Code, Codex, OpenCode) according to a workflow defined in YAML. Who runs each step, what they see, what they are allowed to do, and what happens on failure are all declared in the piece file, not left up to the agents.

TAKT is developed with TAKT itself (dogfooding).

@@ -45,14 +45,14 @@ TAKT **controls** agent execution, and the prompt's building…

Choose one of the following.

- **Use a provider CLI**: install [Claude Code](https://docs.anthropic.com/en/docs/claude-code), [Codex](https://github.com/openai/codex), or [OpenCode](https://opencode.ai)
- **Direct API access**: an **Anthropic API Key**, **OpenAI API Key**, or **OpenCode API Key** (no CLI required)

Additionally required:

- [GitHub CLI](https://cli.github.com/) (`gh`) — needed only for `takt #N` (GitHub Issue execution)

**About pricing**: when using an API Key, TAKT calls the Claude API (Anthropic), the OpenAI API, or the OpenCode API directly. Pricing is the same as when using each CLI tool. Be mindful of costs, since API usage grows especially with automated CI/CD runs.

## Installation

@@ -186,7 +186,7 @@ takt #6 --auto-pr

### Task management (add / run / watch / list)

Batch processing built on `.takt/tasks.yaml` and `.takt/tasks/{slug}/`. Handy for queuing several tasks and running them together later.

#### Add a task (`takt add`)

@@ -201,14 +201,14 @@ takt add #28

#### Run tasks (`takt run`)

```bash
# Run all pending tasks in .takt/tasks.yaml
takt run
```

#### Watch tasks (`takt watch`)

```bash
# Watch .takt/tasks.yaml and auto-run tasks (resident process)
takt watch
```

@@ -225,6 +225,13 @@ takt list --non-interactive --action delete --branch takt/my-branch --yes

```bash
takt list --non-interactive --format json
```

#### Task directory workflow (create, run, inspect)

1. Run `takt add` and confirm that a pending record is created in `.takt/tasks.yaml`.
2. Open the generated `.takt/tasks/{slug}/order.md` and add specs or reference material as needed.
3. Run `takt run` (or `takt watch`) to execute the pending tasks in `tasks.yaml`.
4. Check `.takt/runs/{slug}/reports/` under the same slug as `task_dir`.
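The record lifecycle behind that workflow can be sketched in TypeScript. The field names mirror the tasks.yaml records described in this README, but the transition helpers are illustrative assumptions, not takt's implementation:

```typescript
// Illustrative pending → running → completed lifecycle for a task record.
type TaskStatus = "pending" | "running" | "completed";

interface TaskRecord {
  name: string;
  status: TaskStatus;
  task_dir: string;
  started_at: string | null;
  completed_at: string | null;
}

function start(t: TaskRecord, now: string): TaskRecord {
  return { ...t, status: "running", started_at: now };
}

function complete(t: TaskRecord, now: string): TaskRecord {
  return { ...t, status: "completed", completed_at: now };
}

let task: TaskRecord = {
  name: "add-auth-feature",
  status: "pending",
  task_dir: ".takt/tasks/20260201-015714-foptng",
  started_at: null,
  completed_at: null,
};
task = complete(start(task, "t0"), "t1");
// task.status === "completed"
```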
### Pipeline mode (for CI/automation)

`--pipeline` enters a non-interactive pipeline mode: branch creation → piece execution → commit & push all run automatically. Suited to CI/CD automation.
@@ -315,7 +322,7 @@ takt reset categories

| `--repo <owner/repo>` | Specify the repository (when creating PRs) |
| `--create-worktree <yes\|no>` | Skip the worktree confirmation prompt |
| `-q, --quiet` | Minimal output mode: suppress AI output (for CI) |
| `--provider <name>` | Override the agent provider (claude\|codex\|opencode\|mock) |
| `--model <name>` | Override the agent model |

## Pieces

@@ -328,7 +335,7 @@ TAKT combines YAML-based piece definitions with rule-based routing…

```yaml
name: default
max_movements: 10
initial_movement: plan

# Section map — key: file path (relative to this YAML)
```

@@ -466,6 +473,7 @@ TAKT ships with several built-in pieces:

| `structural-reform` | Project-wide structural reform: iterative codebase restructuring with incremental file splitting. |
| `unit-test` | Unit-test-focused piece: test analysis → test implementation → review → fixes. |
| `e2e-test` | E2E-test-focused piece: E2E analysis → E2E implementation → review → fixes (Vitest-based E2E flow). |
| `frontend` | Frontend-focused development piece: reviews and knowledge injection for React/Next.js. |

**Per-persona provider configuration:** `persona_providers` in the config file routes specific personas to different providers (e.g., coder on Codex, reviewers on Claude). No need to duplicate pieces.

@@ -532,14 +540,14 @@ Claude Code accepts aliases (`opus`, `sonnet`, `haiku`, `opusplan`, `def…

```
.takt/                # Project-level configuration
├── config.yaml       # Project settings (current piece, etc.)
├── tasks/            # Task input directory (.takt/tasks/{slug}/order.md, etc.)
├── tasks.yaml        # Pending task metadata (task_dir, piece, worktree, etc.)
└── runs/             # Per-run artifacts
    └── {slug}/
        ├── reports/  # Run reports (auto-generated)
        ├── context/  # Snapshots of knowledge/policy/previous_response
        ├── logs/     # NDJSON session logs for this run only
        └── meta.json # Run metadata
```

Built-in resources are embedded in the npm package (`builtins/`). User files under `~/.takt/` take precedence.
@@ -553,11 +561,17 @@ Claude Code accepts aliases (`opus`, `sonnet`, `haiku`, `opusplan`, `def…

language: ja
default_piece: default
log_level: info
provider: claude                 # Default provider: claude, codex, or opencode
model: sonnet                    # Default model (optional)
branch_name_strategy: romaji     # Branch name generation: 'romaji' (fast) or 'ai' (slow)
prevent_sleep: false             # Prevent macOS from sleeping during runs (caffeinate)
notification_sound: true         # Enable/disable notification sounds
notification_sound_events:       # Per-event notification sound control
  iteration_limit: false
  piece_complete: true
  piece_abort: true
  run_complete: true             # Enabled when unset; set false to disable
  run_abort: true                # Enabled when unset; set false to disable
concurrency: 1                   # Parallel task count for takt run (1-10, default: 1 = sequential)
task_poll_interval_ms: 500       # Polling interval for detecting new tasks during takt run (100-5000, default: 500)
interactive_preview_movements: 3 # Movement preview count in interactive mode (0-10, default: 3)

@@ -569,9 +583,10 @@ interactive_preview_movements: 3 # Movement preview count in interactive…

# ai-antipattern-reviewer: claude  # Keep the reviewer on Claude

# API Key settings (optional)
# Can be overridden via the TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY / TAKT_OPENCODE_API_KEY environment variables
anthropic_api_key: sk-ant-...      # When using Claude (Anthropic)
# openai_api_key: sk-...           # When using Codex (OpenAI)
# opencode_api_key: ...            # When using OpenCode

# Built-in piece filtering (optional)
# builtin_pieces_enabled: true     # false disables all built-ins

@@ -595,17 +610,17 @@ anthropic_api_key: sk-ant-...      # When using Claude (Anthropic)

1. **Set via environment variables**:

```bash
export TAKT_ANTHROPIC_API_KEY=sk-ant-...  # For Claude
export TAKT_OPENAI_API_KEY=sk-...         # For Codex
export TAKT_OPENCODE_API_KEY=...          # For OpenCode
```

2. **Set in the config file**:
   Add `anthropic_api_key`, `openai_api_key`, or `opencode_api_key` to `~/.takt/config.yaml` as shown above

Priority: environment variables > `config.yaml` settings

**Notes:**

- With an API Key configured, installing Claude Code, Codex, or OpenCode is unnecessary; TAKT calls each API directly.
- **Security**: if you put an API Key in `config.yaml`, take care not to commit the file to Git. Prefer environment variables, or consider adding `~/.takt/config.yaml` to `.gitignore`.

**Pipeline template variables:**

@@ -621,36 +636,43 @@ anthropic_api_key: sk-ant-...      # When using Claude (Anthropic)

1. The piece movement's `model` (highest priority)
2. The custom agent's `model`
3. The global config's `model`
4. The provider default (Claude: sonnet; Codex: codex; OpenCode: provider default)
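The four-level model resolution order above amounts to a simple fallback chain. A hedged TypeScript sketch (names and shape are illustrative, not the actual resolver):

```typescript
// Fallback chain: movement > custom agent > global config > provider default.
const PROVIDER_DEFAULTS: Record<string, string> = {
  claude: "sonnet",
  codex: "codex",
};

function resolveModel(opts: {
  movementModel?: string;
  agentModel?: string;
  globalModel?: string;
  provider: string;
}): string | undefined {
  return (
    opts.movementModel ??
    opts.agentModel ??
    opts.globalModel ??
    PROVIDER_DEFAULTS[opts.provider] // may be undefined (e.g., opencode)
  );
}

const a = resolveModel({ provider: "claude" });                      // "sonnet"
const b = resolveModel({ provider: "claude", globalModel: "opus" }); // "opus"
```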
## Detailed guides

### Task directory format

TAKT stores task metadata in `.takt/tasks.yaml` and keeps long-form specs separately under `.takt/tasks/{slug}/`.

**Recommended layout**:

```text
.takt/
  tasks/
    20260201-015714-foptng/
      order.md
      schema.sql
      wireframe.png
  tasks.yaml
  runs/
    20260201-015714-foptng/
      reports/
```

**tasks.yaml record example**:

```yaml
tasks:
  - name: add-auth-feature
    status: pending
    task_dir: .takt/tasks/20260201-015714-foptng
    piece: default
    created_at: "2026-02-01T01:57:14.000Z"
    started_at: null
    completed_at: null
```

`takt add` auto-generates `.takt/tasks/{slug}/order.md` and stores `task_dir` in `tasks.yaml`.
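A runner consuming tasks.yaml might select pending records like this. The filtering shown is an illustrative assumption about how `takt run` behaves, not the actual implementation:

```typescript
// Select only records still marked pending from a parsed tasks.yaml list.
interface QueuedTask {
  name: string;
  status: "pending" | "running" | "completed";
  task_dir: string;
}

function pendingTasks(tasks: QueuedTask[]): QueuedTask[] {
  return tasks.filter((t) => t.status === "pending");
}

const queue = pendingTasks([
  { name: "a", status: "completed", task_dir: ".takt/tasks/a" },
  { name: "b", status: "pending", task_dir: ".takt/tasks/b" },
]);
// queue contains only "b"
```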
#### Isolated execution via shared clones

@@ -667,15 +689,14 @@ Specifying `worktree` in a YAML task file runs each task in a `git c…

### Session logs

TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/runs/{slug}/logs/`. Each record is appended atomically, so partial logs survive even if the process crashes mid-run, and you can follow them in real time with `tail -f`.

- `.takt/runs/{slug}/logs/{sessionId}.jsonl` - NDJSON session log per piece run
- `.takt/runs/{slug}/meta.json` - run metadata (`task`, `piece`, `start/end`, `status`, etc.)

Record types: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`

The latest previous response is saved to `.takt/runs/{slug}/context/previous_responses/latest.md` and is carried over automatically at run time.
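Reading the NDJSON log is straightforward: one JSON record per line. A minimal sketch, where any fields beyond the record types listed above are assumptions:

```typescript
// Parse an NDJSON log: one JSON object per non-empty line.
interface LogRecord {
  type: string;
  [key: string]: unknown;
}

function parseNdjson(text: string): LogRecord[] {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0) // skip the trailing blank line
    .map((line) => JSON.parse(line) as LogRecord);
}

const records = parseNdjson(
  '{"type":"piece_start"}\n{"type":"step_complete"}\n'
);
// records.map(r => r.type) → ["piece_start", "step_complete"]
```

Because records are appended atomically, a reader like this sees only whole lines even while a run is in progress.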
### Adding custom pieces

@@ -690,7 +711,7 @@ takt eject default

# ~/.takt/pieces/my-piece.yaml
name: my-piece
description: Custom piece
max_movements: 5
initial_movement: analyze

personas:

@@ -740,11 +761,11 @@ personas:

|------|------|
| `{task}` | The original user request (auto-injected if absent from the template) |
| `{iteration}` | Piece-wide turn count (total movements executed) |
| `{max_movements}` | Maximum number of iterations |
| `{movement_iteration}` | Per-movement iteration count (times this movement has run) |
| `{previous_response}` | The previous movement's output (auto-injected if absent from the template) |
| `{user_inputs}` | Additional user input during the piece (auto-injected if absent from the template) |
| `{report_dir}` | Report directory path (e.g., `.takt/runs/20250126-143052-task-summary/reports`) |
| `{report:filename}` | Expands to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |

### Piece design

@@ -777,7 +798,7 @@ rules:

| `edit` | - | Whether the movement may edit project files (`true`/`false`) |
| `pass_previous_response` | `true` | Pass the previous movement's output as `{previous_response}` |
| `allowed_tools` | - | Tools the agent may use (Read, Glob, Grep, Edit, Write, Bash, etc.) |
| `provider` | - | Override the provider for this movement (`claude`, `codex`, or `opencode`) |
| `model` | - | Override the model for this movement |
| `permission_mode` | - | Permission mode: `readonly` / `edit` / `full` (provider-independent) |
| `output_contracts` | - | Output contract definitions for report files |

@@ -855,7 +876,7 @@ npm install -g takt

```bash
takt --pipeline --task "Fix the bug" --auto-pr --repo owner/repo
```

For authentication, set the `TAKT_ANTHROPIC_API_KEY`, `TAKT_OPENAI_API_KEY`, or `TAKT_OPENCODE_API_KEY` environment variable (TAKT's own prefix).

```bash
# When using Claude (Anthropic)

@@ -863,6 +884,9 @@ export TAKT_ANTHROPIC_API_KEY=sk-ant-...

# When using Codex (OpenAI)
export TAKT_OPENAI_API_KEY=sk-...

# When using OpenCode
export TAKT_OPENCODE_API_KEY=...
```

## Documentation


@@ -431,7 +431,7 @@ TAKT's data flow is composed of the following seven main layers…

2. **Log initialization**:
- `createSessionLog()`: create the session log object
- `initNdjsonLog()`: initialize the NDJSON log file
- `meta.json` update: persist the run status (running/completed/aborted) and timestamps

3. **PieceEngine initialization**:

@@ -498,7 +498,7 @@ TAKT's data flow is composed of the following seven main layers…

```typescript
while (state.status === 'running') {
  // 1. Abort & iteration checks
  if (abortRequested) { ... }
  if (iteration >= maxMovements) { ... }

  // 2. Fetch the step
  const step = getStep(state.currentStep);
```

@@ -619,6 +619,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...});

- Step Iteration (per-step)
- Step name
- Report Directory/File info
- Run Source Paths (`.takt/runs/{slug}/context/...`)

3. **User Request** (task body):
- Auto-injected only when the `{task}` placeholder is absent from the template

@@ -626,6 +627,8 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...});

4. **Previous Response** (previous step's output):
- `step.passPreviousResponse === true`, and
- auto-injected only when the `{previous_response}` placeholder is absent from the template
- Length control (2000 chars, `...TRUNCATED...`) is applied
- The Source Path is always injected
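The 2000-character length control with a `...TRUNCATED...` marker can be sketched as follows; the exact marker placement is an assumption:

```typescript
// Truncate an injected previous response to a character limit,
// appending a marker so the agent knows content was cut.
function truncateResponse(text: string, limit = 2000): string {
  if (text.length <= limit) return text;
  return text.slice(0, limit) + "\n...TRUNCATED...";
}

const short = truncateResponse("short");          // unchanged
const long = truncateResponse("x".repeat(3000));  // 2000 chars + marker
```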
5. **Additional User Inputs** (extra input while blocked):
- Auto-injected only when the `{user_inputs}` placeholder is absent from the template

@@ -643,7 +646,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...});

- `{previous_response}`: the previous step's output
- `{user_inputs}`: additional user input
- `{iteration}`: piece-wide iteration
- `{max_movements}`: maximum iterations
- `{step_iteration}`: step iteration
- `{report_dir}`: report directory

@@ -821,7 +824,7 @@ new PieceEngine(pieceConfig, cwd, task, {

1. **Context collection**:
- `task`: the original user request
- `iteration`, `maxMovements`: iteration info
- `stepIteration`: per-step execution count
- `cwd`, `projectCwd`: directory info
- `userInputs`: extra input while blocked


@@ -331,7 +331,7 @@ The core mechanism of Faceted Prompting is **declarative composition**.

```yaml
name: my-workflow
max_movements: 10
initial_movement: plan

movements:
```


@@ -331,7 +331,7 @@ Key properties:

```yaml
name: my-workflow
max_movements: 10
initial_movement: plan

movements:
```


@@ -25,7 +25,7 @@ A piece is a YAML file that defines a sequence of steps executed by AI agents. E…

```yaml
name: my-piece
description: Optional description
max_movements: 10
initial_step: first-step  # Optional, defaults to first step

steps:
```

@@ -55,11 +55,11 @@ steps:

|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide turn count (total steps executed) |
| `{max_movements}` | Maximum movements allowed |
| `{step_iteration}` | Per-step iteration count (how many times THIS step has run) |
| `{previous_response}` | Previous step's output (auto-injected if not in template) |
| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
| `{report_dir}` | Report directory path (e.g., `.takt/runs/20250126-143052-task-summary/reports`) |
| `{report:filename}` | Resolves to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |

> **Note**: `{task}`, `{previous_response}`, and `{user_inputs}` are auto-injected into instructions. You only need explicit placeholders if you want to control their position in the template.

@@ -170,7 +170,7 @@ report:

```yaml
name: simple-impl
max_movements: 5

steps:
- name: implement
```

@@ -191,7 +191,7 @@ steps:

```yaml
name: with-review
max_movements: 10

steps:
- name: implement
```
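The `rules:` → `next` routing used by steps in these piece excerpts can be sketched in TypeScript. Substring matching on the step output is an assumption here; the real matcher may be more sophisticated:

```typescript
// Route to the next step by matching rule conditions against step output.
interface Rule {
  condition: string;
  next: string; // next step name, or "COMPLETE"
}

function route(output: string, rules: Rule[]): string | undefined {
  // First rule whose condition appears in the output wins.
  return rules.find((r) => output.includes(r.condition))?.next;
}

const next = route("Done: implemented the feature", [
  { condition: "Done", next: "review" },
]);
// "review"
```

When no rule matches, the result is `undefined`, which a loop-monitor layer would have to handle (for example by aborting once `max_movements` is reached).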


@@ -5,7 +5,8 @@ If you add or change E2E tests, update this document as well…

## Prerequisites

- The `gh` CLI is available and logged in to the target GitHub account.
- The `takt-testing` repository exists under the target account (E2E clones and uses it).
- Set `TAKT_E2E_PROVIDER` as needed (e.g., `claude` / `codex` / `opencode`).
- With `TAKT_E2E_PROVIDER=opencode`, `TAKT_E2E_MODEL` is required (e.g., `opencode/big-pickle`).
- Some tests take a long time to run; watch out for timeouts.
- E2E clones the repository with `gh` via `e2e/helpers/test-repo.ts` and runs in a temporary directory.
- To avoid interactive UI, set `TAKT_NO_TTY=1` in the E2E environment to disable the TTY.

@@ -13,26 +14,35 @@ If you add or change E2E tests, update this document as well…

- Repository clone: `$(os.tmpdir())/takt-e2e-repo-<random>/`
- Execution environment: `$(os.tmpdir())/takt-e2e-<runId>-<random>/`

## config.yaml for E2E

- The global E2E configuration is generated from `e2e/fixtures/config.e2e.yaml` as the baseline.
- `createIsolatedEnv()` writes this baseline config under a fresh temporary directory (`$TAKT_CONFIG_DIR/config.yaml`) every time.
- Notification sounds are controlled per event via `notification_sound_events`; the E2E default turns mid-run sounds (`iteration_limit` / `piece_complete` / `piece_abort`) off and enables only end-of-run sounds (`run_complete` / `run_abort`).
- When a spec needs to change `provider` or `concurrency`, use `updateIsolatedConfig()` to override only the differences.
- `~/.takt/config.yaml` is never read by E2E, so settings for normal runs are unaffected.
## Run commands

- `npm run test:e2e`: run the whole E2E suite.
- `npm run test:e2e:mock`: run only the mock-pinned E2E tests.
- `npm run test:e2e:provider`: run with both `claude` and `codex`.
- `npm run test:e2e:provider:claude`: run with `TAKT_E2E_PROVIDER=claude`.
- `npm run test:e2e:provider:codex`: run with `TAKT_E2E_PROVIDER=codex`.
- `npm run test:e2e:provider:opencode`: run with `TAKT_E2E_PROVIDER=opencode` (`TAKT_E2E_MODEL` required).
- `npm run test:e2e:all`: run `mock` + `provider` end to end.
- `npm run test:e2e:claude`: alias for `test:e2e:provider:claude`.
- `npm run test:e2e:codex`: alias for `test:e2e:provider:codex`.
- `npm run test:e2e:opencode`: alias for `test:e2e:provider:opencode`.
- `npx vitest run e2e/specs/add-and-run.e2e.ts`: example of running a single spec.

## Scenario list

- Add task and run (`e2e/specs/add-and-run.e2e.ts`)
  - Purpose: confirm that a pending task placed in `.takt/tasks.yaml` can be executed with `takt run`.
  - LLM: conditional (called when `TAKT_E2E_PROVIDER` is `claude` / `codex`)
  - Steps (user actions/commands):
    - Create a task in `.takt/tasks.yaml` (`piece` points to `e2e/fixtures/pieces/simple.yaml`).
    - Run `takt run`.
    - Confirm a line is added to `README.md`.
    - Confirm the task disappears from `tasks.yaml` after the run.
- Worktree/Clone isolation (`e2e/specs/worktree.e2e.ts`)
  - Purpose: confirm that `--create-worktree yes` runs in an isolated environment.
  - LLM: conditional (called when `TAKT_E2E_PROVIDER` is `claude` / `codex`)

@@ -83,13 +93,13 @@ If you add or change E2E tests, update this document as well…

  - Create an issue with `gh issue create ...`.
  - Set `TAKT_MOCK_SCENARIO=e2e/fixtures/scenarios/add-task.json`.
  - Run `takt add '#<issue>'` and answer `n` to `Create worktree?`.
  - Confirm `task_dir` is saved in `.takt/tasks.yaml` and `.takt/tasks/{slug}/order.md` is generated.
- Watch tasks (`e2e/specs/watch.e2e.ts`)
  - Purpose: confirm `takt watch` executes tasks added while it is watching.
  - LLM: not called (pinned to `--provider mock`)
  - Steps (user actions/commands):
    - Start `takt watch --provider mock`.
    - Add a pending task to `.takt/tasks.yaml` (`piece` points to `e2e/fixtures/pieces/mock-single-step.yaml`).
    - Confirm the output contains `Task "watch-task" completed`.
    - Exit with `Ctrl+C`.
- Run tasks graceful shutdown on SIGINT (`e2e/specs/run-sigint-graceful.e2e.ts`)

@@ -111,3 +121,27 @@ If you add or change E2E tests, update this document as well…

  - Confirm `takt list --non-interactive --action diff --branch <branch>` prints diff statistics.
  - Confirm `takt list --non-interactive --action try --branch <branch>` stages the changes.
  - Confirm `takt list --non-interactive --action merge --branch <branch>` merges and deletes the branch.
- Config permission mode (`e2e/specs/cli-config.e2e.ts`)
  - Purpose: confirm `takt config` switches and persists the permission mode.
  - LLM: not called (no-LLM operations only)
  - Steps (user actions/commands):
    - Run `takt config default` and confirm `Switched to: default` is printed.
    - Run `takt config sacrifice-my-pc` and confirm `Switched to: sacrifice-my-pc` is printed.
    - After running `takt config sacrifice-my-pc`, confirm `permissionMode: sacrifice-my-pc` is saved in `.takt/config.yaml`.
    - Run `takt config invalid-mode` and confirm `Invalid mode` is printed.
- Reset categories (`e2e/specs/cli-reset-categories.e2e.ts`)
  - Purpose: confirm `takt reset categories` resets the category overlay.
  - LLM: not called (no-LLM operations only)
  - Steps (user actions/commands):
    - Run `takt reset categories`.
    - Confirm the output contains `reset`.
    - Confirm `$TAKT_CONFIG_DIR/preferences/piece-categories.yaml` exists and contains `piece_categories: {}`.
- Export Claude Code Skill (`e2e/specs/cli-export-cc.e2e.ts`)
  - Purpose: confirm `takt export-cc` deploys the Claude Code Skill.
  - LLM: not called (no-LLM operations only)
  - Steps (user actions/commands):
    - Set `HOME` to a temporary directory.
    - Run `takt export-cc`.
    - Confirm the output contains `ファイルをデプロイしました` (the "files deployed" message).
    - Confirm `$HOME/.claude/skills/takt/SKILL.md` exists.
    - Confirm the `$HOME/.claude/skills/takt/pieces/` and `$HOME/.claude/skills/takt/personas/` directories exist and each contains at least one file.


@@ -0,0 +1,11 @@
provider: claude
language: en
log_level: info
default_piece: default
notification_sound: true
notification_sound_events:
iteration_limit: false
piece_complete: false
piece_abort: false
run_complete: true
run_abort: true


@@ -0,0 +1,5 @@
name: broken
this is not valid YAML
- indentation: [wrong
movements:
broken: {{{


@@ -0,0 +1,27 @@
name: e2e-mock-max-iter
description: Piece with max_movements=2 that loops between two steps
max_movements: 2
initial_movement: step-a
movements:
- name: step-a
edit: true
persona: ../agents/test-coder.md
permission_mode: edit
instruction_template: |
{task}
rules:
- condition: Done
next: step-b
- name: step-b
edit: true
persona: ../agents/test-coder.md
permission_mode: edit
instruction_template: |
Continue the task.
rules:
- condition: Done
next: step-a

View File

@@ -0,0 +1,15 @@
name: e2e-mock-no-match
description: Piece with a strict rule condition that will not match mock output
max_movements: 3
movements:
  - name: execute
    edit: true
    persona: ../agents/test-coder.md
    permission_mode: edit
    instruction_template: |
      {task}
    rules:
      - condition: SpecificMatchThatWillNotOccur
        next: COMPLETE

View File

@ -1,7 +1,7 @@
name: e2e-mock-single name: e2e-mock-single
description: Minimal mock-only piece for CLI E2E description: Minimal mock-only piece for CLI E2E
max_iterations: 3 max_movements: 3
movements: movements:
- name: execute - name: execute

View File

@ -1,7 +1,7 @@
name: e2e-mock-slow-multi-step name: e2e-mock-slow-multi-step
description: Multi-step mock piece to keep tasks in-flight long enough for SIGINT E2E description: Multi-step mock piece to keep tasks in-flight long enough for SIGINT E2E
max_iterations: 20 max_movements: 20
initial_movement: step-1 initial_movement: step-1

View File

@@ -0,0 +1,27 @@
name: e2e-mock-two-step
description: Two-step sequential piece for E2E testing
max_movements: 5
initial_movement: step-1
movements:
  - name: step-1
    edit: true
    persona: ../agents/test-coder.md
    permission_mode: edit
    instruction_template: |
      {task}
    rules:
      - condition: Done
        next: step-2
  - name: step-2
    edit: true
    persona: ../agents/test-coder.md
    permission_mode: edit
    instruction_template: |
      Continue the task.
    rules:
      - condition: Done
        next: COMPLETE

View File

@ -1,7 +1,7 @@
name: e2e-multi-step-parallel name: e2e-multi-step-parallel
description: Multi-step piece with parallel sub-movements for E2E testing description: Multi-step piece with parallel sub-movements for E2E testing
max_iterations: 10 max_movements: 10
initial_movement: plan initial_movement: plan

View File

@ -1,7 +1,7 @@
name: e2e-report-judge name: e2e-report-judge
description: E2E piece that exercises report + judge phases description: E2E piece that exercises report + judge phases
max_iterations: 3 max_movements: 3
movements: movements:
- name: execute - name: execute

View File

@ -1,7 +1,7 @@
name: e2e-simple name: e2e-simple
description: Minimal E2E test piece description: Minimal E2E test piece
max_iterations: 5 max_movements: 5
movements: movements:
- name: execute - name: execute

View File

@@ -0,0 +1,18 @@
[
{
"status": "done",
"content": "Step A output."
},
{
"status": "done",
"content": "Step B output."
},
{
"status": "done",
"content": "Step A output again."
},
{
"status": "done",
"content": "Step B output again."
}
]

View File

@@ -0,0 +1,6 @@
[
{
"status": "error",
"content": "Simulated failure: API error during execution"
}
]

View File

@@ -0,0 +1,6 @@
[
{
"status": "done",
"content": "Only entry in scenario."
}
]

View File

@@ -0,0 +1,14 @@
[
{
"status": "done",
"content": "Task 1 completed successfully."
},
{
"status": "done",
"content": "Task 2 completed successfully."
},
{
"status": "done",
"content": "Task 3 completed successfully."
}
]

View File

@@ -0,0 +1,14 @@
[
{
"status": "done",
"content": "Task 1 completed successfully."
},
{
"status": "error",
"content": "Task 2 encountered an error."
},
{
"status": "done",
"content": "Task 3 completed successfully."
}
]

View File

@@ -0,0 +1,10 @@
[
{
"status": "done",
"content": "Step 1 output text completed."
},
{
"status": "done",
"content": "Step 2 output text completed."
}
]
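Scenario fixtures like the JSON arrays above are scripted responses consumed one entry per provider call. A queue-based sketch of that consumption pattern (the `ScenarioMock` class and its API are illustrative assumptions, not takt's actual mock provider):

```typescript
// Each scripted entry mirrors the fixture shape: a status plus the text the
// "agent" would have produced for that call.
interface MockEntry {
  status: 'done' | 'error';
  content: string;
}

class ScenarioMock {
  private queue: MockEntry[];

  constructor(scenario: MockEntry[]) {
    // Copy so the original fixture array is never mutated.
    this.queue = [...scenario];
  }

  // Pop the next scripted response; running past the script is a test bug.
  next(): MockEntry {
    const entry = this.queue.shift();
    if (!entry) {
      throw new Error('Scenario exhausted: more calls than scripted entries');
    }
    return entry;
  }
}

const mock = new ScenarioMock([
  { status: 'done', content: 'Step 1 output text completed.' },
  { status: 'done', content: 'Step 2 output text completed.' },
]);
const first = mock.next();
const second = mock.next();
```

Failing loudly on exhaustion (rather than repeating the last entry) makes a piece that issues more calls than the scenario scripted show up as an immediate test failure.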

View File

@@ -1,6 +1,8 @@
-import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
-import { join } from 'node:path';
+import { mkdtempSync, mkdirSync, readFileSync, writeFileSync, rmSync } from 'node:fs';
+import { dirname, join, resolve } from 'node:path';
 import { tmpdir } from 'node:os';
+import { fileURLToPath } from 'node:url';
+import { parse as parseYaml, stringify as stringifyYaml } from 'yaml';

 export interface IsolatedEnv {
   runId: string;
@@ -9,6 +11,73 @@ export interface IsolatedEnv {
   cleanup: () => void;
 }

+type E2EConfig = Record<string, unknown>;
+type NotificationSoundEvents = Record<string, unknown>;
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+const E2E_CONFIG_FIXTURE_PATH = resolve(__dirname, '../fixtures/config.e2e.yaml');
+
+function readE2EFixtureConfig(): E2EConfig {
+  const raw = readFileSync(E2E_CONFIG_FIXTURE_PATH, 'utf-8');
+  const parsed = parseYaml(raw);
+  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
+    throw new Error(`Invalid E2E config fixture: ${E2E_CONFIG_FIXTURE_PATH}`);
+  }
+  return parsed as E2EConfig;
+}
+
+function writeConfigFile(taktDir: string, config: E2EConfig): void {
+  writeFileSync(join(taktDir, 'config.yaml'), `${stringifyYaml(config)}`);
+}
+
+function parseNotificationSoundEvents(
+  source: E2EConfig,
+  sourceName: string,
+): NotificationSoundEvents | undefined {
+  const value = source.notification_sound_events;
+  if (value === undefined) {
+    return undefined;
+  }
+  if (!value || typeof value !== 'object' || Array.isArray(value)) {
+    throw new Error(
+      `Invalid notification_sound_events in ${sourceName}: expected object`,
+    );
+  }
+  return value as NotificationSoundEvents;
+}
+
+function mergeIsolatedConfig(
+  fixture: E2EConfig,
+  current: E2EConfig,
+  patch: E2EConfig,
+): E2EConfig {
+  const merged: E2EConfig = { ...fixture, ...current, ...patch };
+  const fixtureEvents = parseNotificationSoundEvents(fixture, 'fixture');
+  const currentEvents = parseNotificationSoundEvents(current, 'current config');
+  const patchEvents = parseNotificationSoundEvents(patch, 'patch');
+  if (!fixtureEvents && !currentEvents && !patchEvents) {
+    return merged;
+  }
+  merged.notification_sound_events = {
+    ...(fixtureEvents ?? {}),
+    ...(currentEvents ?? {}),
+    ...(patchEvents ?? {}),
+  };
+  return merged;
+}
+
+export function updateIsolatedConfig(taktDir: string, patch: E2EConfig): void {
+  const current = readE2EFixtureConfig();
+  const configPath = join(taktDir, 'config.yaml');
+  const raw = readFileSync(configPath, 'utf-8');
+  const parsed = parseYaml(raw);
+  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
+    throw new Error(`Invalid isolated config: ${configPath}`);
+  }
+  writeConfigFile(taktDir, mergeIsolatedConfig(current, parsed as E2EConfig, patch));
+}
+
 /**
  * Create an isolated environment for E2E testing.
  *
@@ -24,18 +93,21 @@ export function createIsolatedEnv(): IsolatedEnv {
   const gitConfigPath = join(baseDir, '.gitconfig');

   // Create TAKT config directory and config.yaml
-  // Use TAKT_E2E_PROVIDER to match config provider with the actual provider being tested
-  const configProvider = process.env.TAKT_E2E_PROVIDER ?? 'claude';
   mkdirSync(taktDir, { recursive: true });
-  writeFileSync(
-    join(taktDir, 'config.yaml'),
-    [
-      `provider: ${configProvider}`,
-      'language: en',
-      'log_level: info',
-      'default_piece: default',
-    ].join('\n'),
-  );
+  const baseConfig = readE2EFixtureConfig();
+  const provider = process.env.TAKT_E2E_PROVIDER;
+  const model = process.env.TAKT_E2E_MODEL;
+  if (provider === 'opencode' && !model) {
+    throw new Error('TAKT_E2E_PROVIDER=opencode requires TAKT_E2E_MODEL (e.g. opencode/big-pickle)');
+  }
+  const config = provider
+    ? {
+        ...baseConfig,
+        provider,
+        ...(provider === 'opencode' && model ? { model } : {}),
+      }
+    : baseConfig;
+  writeConfigFile(taktDir, config);

   // Create isolated Git config file
   writeFileSync(
@@ -58,11 +130,7 @@
     taktDir,
     env,
     cleanup: () => {
-      try {
-        rmSync(baseDir, { recursive: true, force: true });
-      } catch {
-        // Best-effort cleanup; ignore errors (e.g., already deleted)
-      }
+      rmSync(baseDir, { recursive: true, force: true });
     },
   };
 }
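The config update path in this diff layers three sources: fixture defaults, the current on-disk config, then the patch, with a one-level-deep merge for `notification_sound_events` instead of whole-object replacement. A standalone sketch of that precedence (plain objects instead of YAML; `mergeConfig` is a simplified stand-in for `mergeIsolatedConfig`):

```typescript
type Cfg = Record<string, unknown>;

// Later sources win for scalar keys; notification_sound_events is merged
// key-by-key so a patch toggling one event flag keeps the others intact.
function mergeConfig(fixture: Cfg, current: Cfg, patch: Cfg): Cfg {
  const merged: Cfg = { ...fixture, ...current, ...patch };
  const events = {
    ...((fixture.notification_sound_events as Cfg) ?? {}),
    ...((current.notification_sound_events as Cfg) ?? {}),
    ...((patch.notification_sound_events as Cfg) ?? {}),
  };
  if (Object.keys(events).length > 0) {
    merged.notification_sound_events = events;
  }
  return merged;
}

const merged = mergeConfig(
  { provider: 'claude', notification_sound_events: { run_complete: true } },
  { provider: 'codex' },
  { provider: 'mock', notification_sound_events: { run_abort: false } },
);
// provider comes from the patch; run_complete survives from the fixture
// even though only run_abort was patched.
```

The deep merge matters precisely because a naive top-level spread would let a patch that sets one event flag silently discard every other flag the fixture defined.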

View File

@@ -74,10 +74,10 @@ describe('E2E: Add task and run (takt add → takt run)', () => {
     const readme = readFileSync(readmePath, 'utf-8');
     expect(readme).toContain('E2E test passed');

-    // Verify task status became completed
+    // Verify completed task was removed from tasks.yaml
     const tasksRaw = readFileSync(tasksFile, 'utf-8');
     const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
     const executed = parsed.tasks?.find((task) => task.name === 'e2e-test-task');
-    expect(executed?.status).toBe('completed');
+    expect(executed).toBeUndefined();
   }, 240_000);
 });
}); });

View File

@@ -1,10 +1,14 @@
 import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { execFileSync } from 'node:child_process';
-import { readFileSync, writeFileSync } from 'node:fs';
+import { readFileSync, existsSync } from 'node:fs';
 import { join, dirname, resolve } from 'node:path';
 import { fileURLToPath } from 'node:url';
 import { parse as parseYaml } from 'yaml';
-import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
+import {
+  createIsolatedEnv,
+  updateIsolatedConfig,
+  type IsolatedEnv,
+} from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';
 import { runTakt } from '../helpers/takt-runner';
@@ -22,16 +26,10 @@ describe('E2E: Add task from GitHub issue (takt add)', () => {
     testRepo = createTestRepo();

     // Use mock provider to stabilize summarizer
-    writeFileSync(
-      join(isolatedEnv.taktDir, 'config.yaml'),
-      [
-        'provider: mock',
-        'model: mock-model',
-        'language: en',
-        'log_level: info',
-        'default_piece: default',
-      ].join('\n'),
-    );
+    updateIsolatedConfig(isolatedEnv.taktDir, {
+      provider: 'mock',
+      model: 'mock-model',
+    });

     const createOutput = execFileSync(
       'gh',
@@ -87,8 +85,12 @@ describe('E2E: Add task from GitHub issue (takt add)', () => {
     const tasksFile = join(testRepo.path, '.takt', 'tasks.yaml');
     const content = readFileSync(tasksFile, 'utf-8');
-    const parsed = parseYaml(content) as { tasks?: Array<{ issue?: number }> };
+    const parsed = parseYaml(content) as { tasks?: Array<{ issue?: number; task_dir?: string }> };
     expect(parsed.tasks?.length).toBe(1);
     expect(parsed.tasks?.[0]?.issue).toBe(Number(issueNumber));
+    expect(parsed.tasks?.[0]?.task_dir).toBeTypeOf('string');
+    const orderPath = join(testRepo.path, String(parsed.tasks?.[0]?.task_dir), 'order.md');
+    expect(existsSync(orderPath)).toBe(true);
+    expect(readFileSync(orderPath, 'utf-8')).toContain('E2E Add Issue');
   }, 240_000);
 });
}); });

View File

@@ -0,0 +1,85 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-catalog-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Catalog command (takt catalog)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should list all facet types when no argument given', () => {
// Given: a local repo with isolated env
// When: running takt catalog
const result = runTakt({
args: ['catalog'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains facet type sections
expect(result.exitCode).toBe(0);
const output = result.stdout.toLowerCase();
expect(output).toMatch(/persona/);
});
it('should list facets for a specific type', () => {
// Given: a local repo with isolated env
// When: running takt catalog personas
const result = runTakt({
args: ['catalog', 'personas'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains persona names
expect(result.exitCode).toBe(0);
expect(result.stdout).toMatch(/coder/i);
});
it('should error for an invalid facet type', () => {
// Given: a local repo with isolated env
// When: running takt catalog with an invalid type
const result = runTakt({
args: ['catalog', 'invalidtype'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains an error or lists valid types
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/invalid|not found|valid types|unknown/i);
});
});

View File

@@ -0,0 +1,55 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-clear-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Clear sessions command (takt clear)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should clear sessions without error', () => {
// Given: a local repo with isolated env
// When: running takt clear
const result = runTakt({
args: ['clear'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: exits cleanly
expect(result.exitCode).toBe(0);
const output = result.stdout.toLowerCase();
expect(output).toMatch(/clear|session|removed|no session/);
});
});

e2e/specs/cli-config.e2e.ts Normal file
View File

@@ -0,0 +1,102 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, readFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-config-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Config command (takt config)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should switch to default mode with explicit argument', () => {
// Given: a local repo with isolated env
// When: running takt config default
const result = runTakt({
args: ['config', 'default'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: exits successfully and outputs switched message
expect(result.exitCode).toBe(0);
const output = result.stdout;
expect(output).toMatch(/Switched to: default/);
});
it('should switch to sacrifice-my-pc mode with explicit argument', () => {
// Given: a local repo with isolated env
// When: running takt config sacrifice-my-pc
const result = runTakt({
args: ['config', 'sacrifice-my-pc'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: exits successfully and outputs switched message
expect(result.exitCode).toBe(0);
const output = result.stdout;
expect(output).toMatch(/Switched to: sacrifice-my-pc/);
});
it('should persist permission mode to project config', () => {
// Given: a local repo with isolated env
// When: running takt config sacrifice-my-pc
runTakt({
args: ['config', 'sacrifice-my-pc'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: .takt/config.yaml contains permissionMode: sacrifice-my-pc
const configPath = join(repo.path, '.takt', 'config.yaml');
const content = readFileSync(configPath, 'utf-8');
expect(content).toMatch(/permissionMode:\s*sacrifice-my-pc/);
});
it('should report error for invalid mode name', () => {
// Given: a local repo with isolated env
// When: running takt config with an invalid mode
const result = runTakt({
args: ['config', 'invalid-mode'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains invalid mode message
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/Invalid mode/);
});
});

View File

@@ -0,0 +1,88 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, existsSync, readdirSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-export-cc-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Export-cc command (takt export-cc)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
let fakeHome: string;
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
fakeHome = mkdtempSync(join(tmpdir(), 'takt-e2e-export-cc-home-'));
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
try { rmSync(fakeHome, { recursive: true, force: true }); } catch { /* best-effort */ }
});
it('should deploy skill files to isolated home directory', () => {
// Given: a local repo with isolated env and HOME redirected to fakeHome
const env: NodeJS.ProcessEnv = { ...isolatedEnv.env, HOME: fakeHome };
// When: running takt export-cc
const result = runTakt({
args: ['export-cc'],
cwd: repo.path,
env,
});
// Then: exits successfully and outputs deploy message
expect(result.exitCode).toBe(0);
const output = result.stdout;
expect(output).toMatch(/ファイルをデプロイしました/);
// Then: SKILL.md exists in the skill directory
const skillMdPath = join(fakeHome, '.claude', 'skills', 'takt', 'SKILL.md');
expect(existsSync(skillMdPath)).toBe(true);
});
it('should deploy resource directories', () => {
// Given: a local repo with isolated env and HOME redirected to fakeHome
const env: NodeJS.ProcessEnv = { ...isolatedEnv.env, HOME: fakeHome };
// When: running takt export-cc
runTakt({
args: ['export-cc'],
cwd: repo.path,
env,
});
// Then: pieces/ and personas/ directories exist with at least one file each
const skillDir = join(fakeHome, '.claude', 'skills', 'takt');
const piecesDir = join(skillDir, 'pieces');
expect(existsSync(piecesDir)).toBe(true);
const pieceFiles = readdirSync(piecesDir);
expect(pieceFiles.length).toBeGreaterThan(0);
const personasDir = join(skillDir, 'personas');
expect(existsSync(personasDir)).toBe(true);
const personaFiles = readdirSync(personasDir);
expect(personaFiles.length).toBeGreaterThan(0);
});
});

e2e/specs/cli-help.e2e.ts Normal file
View File

@@ -0,0 +1,73 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-help-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Help command (takt --help)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should display subcommand list with --help', () => {
// Given: a local repo with isolated env
// When: running takt --help
const result = runTakt({
args: ['--help'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output lists subcommands
expect(result.exitCode).toBe(0);
expect(result.stdout).toMatch(/run/);
expect(result.stdout).toMatch(/add/);
expect(result.stdout).toMatch(/list/);
expect(result.stdout).toMatch(/eject/);
});
it('should display run subcommand help with takt run --help', () => {
// Given: a local repo with isolated env
// When: running takt run --help
const result = runTakt({
args: ['run', '--help'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains run command description
expect(result.exitCode).toBe(0);
const output = result.stdout.toLowerCase();
expect(output).toMatch(/run|task|pending/);
});
});

View File

@@ -0,0 +1,76 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-prompt-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Prompt preview command (takt prompt)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should output prompt preview header and movement info for a piece', () => {
// Given: a piece file path
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
// When: running takt prompt with piece path
const result = runTakt({
args: ['prompt', piecePath],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: output contains "Prompt Preview" header and movement info
// (may fail on Phase 3 for pieces with tag-based rules, but header is still output)
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/Prompt Preview|Movement 1/i);
});
it('should report not found for a nonexistent piece name', () => {
// Given: a nonexistent piece name
// When: running takt prompt with invalid piece
const result = runTakt({
args: ['prompt', 'nonexistent-piece-xyz'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: reports piece not found
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/not found/i);
});
});

View File

@@ -0,0 +1,61 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, readFileSync, existsSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-reset-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Reset categories command (takt reset categories)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should reset categories and create overlay file', () => {
// Given: a local repo with isolated env
// When: running takt reset categories
const result = runTakt({
args: ['reset', 'categories'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: exits successfully and outputs reset message
expect(result.exitCode).toBe(0);
const output = result.stdout;
expect(output).toMatch(/reset/i);
// Then: piece-categories.yaml exists with initial content
const categoriesPath = join(isolatedEnv.taktDir, 'preferences', 'piece-categories.yaml');
expect(existsSync(categoriesPath)).toBe(true);
const content = readFileSync(categoriesPath, 'utf-8');
expect(content).toContain('piece_categories: {}');
});
});

View File

@@ -0,0 +1,70 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-switch-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating E2E tests, also update docs/testing/e2e.md
describe('E2E: Switch piece command (takt switch)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should switch piece when a valid piece name is given', () => {
// Given: a local repo with isolated env
// When: running takt switch default
const result = runTakt({
args: ['switch', 'default'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: exits successfully
expect(result.exitCode).toBe(0);
const output = result.stdout.toLowerCase();
expect(output).toMatch(/default|switched|piece/);
});
it('should error when a nonexistent piece name is given', () => {
// Given: a local repo with isolated env
// When: running takt switch with a nonexistent piece name
const result = runTakt({
args: ['switch', 'nonexistent-piece-xyz'],
cwd: repo.path,
env: isolatedEnv.env,
});
// Then: error output
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/not found|error|does not exist/i);
});
});
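The `createLocalRepo` helper above is duplicated verbatim (modulo the tmpdir prefix) in each of the new E2E files. A shared helper could dedupe it — a sketch only; the module path and the `prefix` parameter are assumptions, not part of the existing helpers:

```typescript
// e2e/helpers/local-repo.ts (hypothetical path) — one copy of the repeated
// git-repo scaffolding; the prefix customizes the temp directory name.
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';

export interface LocalRepo {
  path: string;
  cleanup: () => void;
}

export function createLocalRepo(prefix = 'takt-e2e-'): LocalRepo {
  const repoPath = mkdtempSync(join(tmpdir(), prefix));
  // Run a git subcommand inside the repo, swallowing output.
  const git = (...args: string[]): void => {
    execFileSync('git', args, { cwd: repoPath, stdio: 'pipe' });
  };
  git('init');
  git('config', 'user.email', 'test@example.com');
  git('config', 'user.name', 'Test');
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  git('add', '.');
  git('commit', '-m', 'init');
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}
```

Each test file would then import it and pass its own prefix, keeping the git scaffolding in one place.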


@ -0,0 +1,157 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-error-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Error handling edge cases (mock)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should error when --piece points to a nonexistent file path', () => {
// Given: a nonexistent piece file path
// When: running with a bad piece path
const result = runTakt({
args: [
'--task', 'test',
'--piece', '/nonexistent/path/to/piece.yaml',
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: exits with error
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/not found|does not exist|ENOENT/i);
}, 240_000);
it('should report error when --piece specifies a nonexistent piece name', () => {
// Given: a nonexistent piece name
// When: running with a bad piece name
const result = runTakt({
args: [
'--task', 'test',
'--piece', 'nonexistent-piece-name-xyz',
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: output contains error about piece not found
// Note: takt reports the error but currently exits with code 0
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/not found/i);
}, 240_000);
it('should error when --pipeline is used without --task or --issue', () => {
// Given: pipeline mode with no task or issue
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
// When: running in pipeline mode without a task
const result = runTakt({
args: [
'--pipeline',
'--piece', piecePath,
'--skip-git',
'--provider', 'mock',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: exits with error (should not hang in interactive mode due to TAKT_NO_TTY=1)
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/task|issue|required/i);
}, 240_000);
it('should error when --create-worktree receives an invalid value', () => {
// Given: invalid worktree value
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
// When: running with invalid worktree option
const result = runTakt({
args: [
'--task', 'test',
'--piece', piecePath,
'--create-worktree', 'invalid-value',
'--provider', 'mock',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: exits with error or warning about invalid value
const combined = result.stdout + result.stderr;
const hasError = result.exitCode !== 0 || combined.match(/invalid|error|must be/i);
expect(hasError).toBeTruthy();
}, 240_000);
it('should error when piece file contains invalid YAML', () => {
// Given: a broken YAML piece file
const brokenPiecePath = resolve(__dirname, '../fixtures/pieces/broken.yaml');
// When: running with the broken piece
const result = runTakt({
args: [
'--task', 'test',
'--piece', brokenPiecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: exits with error about parsing
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/parse|invalid|error|validation/i);
}, 240_000);
});
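The broken-YAML test above relies on a committed `fixtures/pieces/broken.yaml`. If one preferred generating the fixture on the fly, any syntactically invalid YAML would do — a sketch, not how the repo currently does it:

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Writes a deliberately unparseable piece file and returns its path.
// An unclosed flow sequence plus a tab-indented mapping is enough
// to fail any YAML parser.
export function writeBrokenPiece(): string {
  const dir = mkdtempSync(join(tmpdir(), 'takt-fixture-'));
  const piecePath = join(dir, 'broken.yaml');
  writeFileSync(piecePath, 'movements: [unclosed\n\t- bad: : entry\n', 'utf-8');
  return piecePath;
}
```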


@ -0,0 +1,124 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-piece-err-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Piece error handling (mock)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should abort when agent returns error status', () => {
// Given: a piece and a scenario that returns error status
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-no-match.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/no-match.json');
// When: executing the piece
const result = runTakt({
args: [
'--task', 'Test error status abort',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: piece aborts with a non-zero exit code
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/failed|aborted|error/i);
}, 240_000);
it('should abort when max_movements is reached', () => {
// Given: a piece with max_movements=2 that loops between step-a and step-b
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-max-iter.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/max-iter-loop.json');
// When: executing the piece
const result = runTakt({
args: [
'--task', 'Test max movements',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: piece aborts due to iteration limit
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/Max movements|iteration|aborted/i);
}, 240_000);
it('should pass previous response between sequential steps', () => {
// Given: a two-step piece and a scenario with distinct step outputs
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-two-step.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/two-step-done.json');
// When: executing the piece
const result = runTakt({
args: [
'--task', 'Test previous response passing',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: piece completes successfully (both steps execute)
expect(result.exitCode).toBe(0);
expect(result.stdout).toContain('Piece completed');
}, 240_000);
});


@ -0,0 +1,131 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
createIsolatedEnv,
updateIsolatedConfig,
type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-provider-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Provider error handling (mock)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should override config provider with --provider flag', () => {
// Given: config.yaml has provider: claude, but CLI flag specifies mock
updateIsolatedConfig(isolatedEnv.taktDir, {
provider: 'claude',
});
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');
// When: running with --provider mock
const result = runTakt({
args: [
'--task', 'Test provider override',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: executes successfully with mock provider
expect(result.exitCode).toBe(0);
expect(result.stdout).toContain('Piece completed');
}, 240_000);
it('should use default mock response when scenario entries are exhausted', () => {
// Given: a two-step piece with only 1 scenario entry
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-two-step.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/one-entry-only.json');
// When: executing the piece (step-2 will have no scenario entry)
const result = runTakt({
args: [
'--task', 'Test scenario exhaustion',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: does not crash; either completes or aborts gracefully
const combined = result.stdout + result.stderr;
expect(combined).not.toContain('UnhandledPromiseRejection');
expect(combined).not.toContain('SIGTERM');
}, 240_000);
it('should error when scenario file does not exist', () => {
// Given: TAKT_MOCK_SCENARIO pointing to a non-existent file
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
// When: executing with a bad scenario path
const result = runTakt({
args: [
'--task', 'Test bad scenario',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: '/nonexistent/path/scenario.json',
},
timeout: 240_000,
});
// Then: exits with error and clear message
expect(result.exitCode).not.toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/[Ss]cenario file not found|ENOENT/);
}, 240_000);
});
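Several tests in this file (and the neighboring ones) repeat the same spread to inject `TAKT_MOCK_SCENARIO`. A tiny pure helper would keep that in one place — a sketch; the option shape mirrors what these tests pass to `runTakt`, and the real type may differ:

```typescript
// Illustrative option shape, modeled on the runTakt call sites above.
interface RunOpts {
  args: string[];
  cwd: string;
  env: Record<string, string | undefined>;
  timeout?: number;
}

// Returns a copy of the options with the mock scenario path injected,
// leaving the original env object untouched.
export function withScenarioEnv(opts: RunOpts, scenarioPath: string): RunOpts {
  return { ...opts, env: { ...opts.env, TAKT_MOCK_SCENARIO: scenarioPath } };
}
```

A call site would shrink to `runTakt(withScenarioEnv(baseOpts, scenarioPath))`.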


@ -0,0 +1,72 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-quiet-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Quiet mode (--quiet)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should suppress AI stream output in quiet mode', () => {
// Given: a simple piece and scenario
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');
// When: running with --quiet flag
const result = runTakt({
args: [
'--task', 'Test quiet mode',
'--piece', piecePath,
'--create-worktree', 'no',
'--provider', 'mock',
'--quiet',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: completes successfully; mock content should not appear in output
expect(result.exitCode).toBe(0);
// In quiet mode, the raw mock response text should be suppressed
expect(result.stdout).not.toContain('Mock response for persona');
}, 240_000);
});
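Absence assertions like the quiet-mode check above give more informative failures when all banned fragments are checked at once rather than via chained `expect(...).not.toContain` calls. A small sketch (the helper name is illustrative, not an existing utility):

```typescript
// Throws with a list of every banned substring actually present,
// instead of failing on the first one only.
export function expectNoneOf(output: string, banned: string[]): void {
  const hits = banned.filter((fragment) => output.includes(fragment));
  if (hits.length > 0) {
    throw new Error(`Unexpected output fragments: ${hits.join(', ')}`);
  }
}
```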


@ -0,0 +1,183 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
createIsolatedEnv,
updateIsolatedConfig,
type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-run-multi-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Run multiple tasks (takt run)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
// Override config to use mock provider
updateIsolatedConfig(isolatedEnv.taktDir, {
provider: 'mock',
});
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should execute all pending tasks sequentially', () => {
// Given: 3 pending tasks in tasks.yaml
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/run-three-tasks.json');
const now = new Date().toISOString();
mkdirSync(join(repo.path, '.takt'), { recursive: true });
writeFileSync(
join(repo.path, '.takt', 'tasks.yaml'),
[
'tasks:',
' - name: task-1',
' status: pending',
' content: "E2E task 1"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
' - name: task-2',
' status: pending',
' content: "E2E task 2"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
' - name: task-3',
' status: pending',
' content: "E2E task 3"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
].join('\n'),
'utf-8',
);
// When: running takt run
const result = runTakt({
args: ['run', '--provider', 'mock'],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: all 3 tasks complete
expect(result.exitCode).toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toContain('task-1');
expect(combined).toContain('task-2');
expect(combined).toContain('task-3');
}, 240_000);
it('should continue remaining tasks when one task fails', () => {
// Given: 3 tasks where the 2nd will fail (error status)
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/run-with-failure.json');
const now = new Date().toISOString();
mkdirSync(join(repo.path, '.takt'), { recursive: true });
writeFileSync(
join(repo.path, '.takt', 'tasks.yaml'),
[
'tasks:',
' - name: task-ok-1',
' status: pending',
' content: "Should succeed"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
' - name: task-fail',
' status: pending',
' content: "Should fail"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
' - name: task-ok-2',
' status: pending',
' content: "Should succeed after failure"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
].join('\n'),
'utf-8',
);
// When: running takt run
const result = runTakt({
args: ['run', '--provider', 'mock'],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: exit code is non-zero (failure occurred), but task-ok-2 was still attempted
const combined = result.stdout + result.stderr;
expect(combined).toContain('task-ok-1');
expect(combined).toContain('task-fail');
expect(combined).toContain('task-ok-2');
}, 240_000);
it('should exit cleanly when no pending tasks exist', () => {
// Given: an empty tasks.yaml
mkdirSync(join(repo.path, '.takt'), { recursive: true });
writeFileSync(
join(repo.path, '.takt', 'tasks.yaml'),
'tasks: []\n',
'utf-8',
);
// When: running takt run
const result = runTakt({
args: ['run', '--provider', 'mock'],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: exits cleanly with code 0
expect(result.exitCode).toBe(0);
}, 240_000);
});
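The two `tasks.yaml` literals above repeat the same seven-line entry shape by hand. A small builder keeps that shape in one place — a sketch mirroring the fields these tests write, not part of the takt API:

```typescript
// Field names copied from the YAML assembled in the tests above.
interface TaskEntry {
  name: string;
  content: string;
  piece: string;
}

// Serializes pending task entries in the same layout the tests use.
function buildTasksYaml(entries: TaskEntry[], createdAt: string): string {
  const lines = ['tasks:'];
  for (const entry of entries) {
    lines.push(
      `  - name: ${entry.name}`,
      '    status: pending',
      `    content: "${entry.content}"`,
      `    piece: "${entry.piece}"`,
      `    created_at: "${createdAt}"`,
      '    started_at: null',
      '    completed_at: null',
    );
  }
  return lines.join('\n');
}
```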


@ -3,7 +3,11 @@ import { spawn } from 'node:child_process';
 import { mkdirSync, writeFileSync, readFileSync } from 'node:fs';
 import { join, resolve, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
-import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
+import {
+  createIsolatedEnv,
+  updateIsolatedConfig,
+  type IsolatedEnv,
+} from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';
 const __filename = fileURLToPath(import.meta.url);
@ -50,18 +54,12 @@ describe('E2E: Run tasks graceful shutdown on SIGINT (parallel)', () => {
     isolatedEnv = createIsolatedEnv();
     testRepo = createTestRepo();
-    writeFileSync(
-      join(isolatedEnv.taktDir, 'config.yaml'),
-      [
-        'provider: mock',
-        'model: mock-model',
-        'language: en',
-        'log_level: info',
-        'default_piece: default',
-        'concurrency: 2',
-        'task_poll_interval_ms: 100',
-      ].join('\n'),
-    );
+    updateIsolatedConfig(isolatedEnv.taktDir, {
+      provider: 'mock',
+      model: 'mock-model',
+      concurrency: 2,
+      task_poll_interval_ms: 100,
+    });
   });
   afterEach(() => {


@ -0,0 +1,134 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
createIsolatedEnv,
updateIsolatedConfig,
type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function createLocalRepo(): { path: string; cleanup: () => void } {
const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-contentfile-'));
execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
writeFileSync(join(repoPath, 'README.md'), '# test\n');
execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
return {
path: repoPath,
cleanup: () => {
try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
},
};
}
// When updating these E2E tests, also update docs/testing/e2e.md
describe('E2E: Task content_file reference (mock)', () => {
let isolatedEnv: IsolatedEnv;
let repo: { path: string; cleanup: () => void };
const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
repo = createLocalRepo();
updateIsolatedConfig(isolatedEnv.taktDir, {
provider: 'mock',
});
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should execute task using content_file reference', () => {
// Given: a task with content_file pointing to an existing file
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');
const now = new Date().toISOString();
mkdirSync(join(repo.path, '.takt'), { recursive: true });
// Create the content file
writeFileSync(
join(repo.path, 'task-content.txt'),
'Create a noop file for E2E testing.',
'utf-8',
);
writeFileSync(
join(repo.path, '.takt', 'tasks.yaml'),
[
'tasks:',
' - name: content-file-task',
' status: pending',
' content_file: "./task-content.txt"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
].join('\n'),
'utf-8',
);
// When: running takt run
const result = runTakt({
args: ['run', '--provider', 'mock'],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 240_000,
});
// Then: task executes successfully
expect(result.exitCode).toBe(0);
const combined = result.stdout + result.stderr;
expect(combined).toContain('content-file-task');
}, 240_000);
it('should fail when content_file references a nonexistent file', () => {
// Given: a task with content_file pointing to a nonexistent file
const now = new Date().toISOString();
mkdirSync(join(repo.path, '.takt'), { recursive: true });
writeFileSync(
join(repo.path, '.takt', 'tasks.yaml'),
[
'tasks:',
' - name: bad-content-file-task',
' status: pending',
' content_file: "./nonexistent-content.txt"',
` piece: "${piecePath}"`,
` created_at: "${now}"`,
' started_at: null',
' completed_at: null',
].join('\n'),
'utf-8',
);
// When: running takt run
const result = runTakt({
args: ['run', '--provider', 'mock'],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
// Then: task fails with a meaningful error
const combined = result.stdout + result.stderr;
expect(combined).toMatch(/not found|ENOENT|missing|error/i);
}, 240_000);
});


@ -96,6 +96,6 @@ describe('E2E: Watch tasks (takt watch)', () => {
     const tasksRaw = readFileSync(tasksFile, 'utf-8');
     const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
     const watchTask = parsed.tasks?.find((task) => task.name === 'watch-task');
-    expect(watchTask?.status).toBe('completed');
+    expect(watchTask).toBeUndefined();
   }, 240_000);
 });

package-lock.json (generated)

@ -1,16 +1,17 @@
 {
   "name": "takt",
-  "version": "0.11.0",
+  "version": "0.12.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "takt",
-      "version": "0.11.0",
+      "version": "0.12.0",
       "license": "MIT",
       "dependencies": {
         "@anthropic-ai/claude-agent-sdk": "^0.2.37",
         "@openai/codex-sdk": "^0.98.0",
+        "@opencode-ai/sdk": "^1.1.53",
         "chalk": "^5.3.0",
         "commander": "^12.1.0",
         "update-notifier": "^7.3.1",
@ -936,6 +937,12 @@
         "node": ">=18"
       }
     },
+    "node_modules/@opencode-ai/sdk": {
+      "version": "1.1.53",
+      "resolved": "https://registry.npmjs.org/@opencode-ai/sdk/-/sdk-1.1.53.tgz",
+      "integrity": "sha512-RUIVnPOP1CyyU32FrOOYuE7Ge51lOBuhaFp2NSX98ncApT7ffoNetmwzqrhOiJQgZB1KrbCHLYOCK6AZfacxag==",
+      "license": "MIT"
+    },
     "node_modules/@pnpm/config.env-replace": {
       "version": "1.1.0",
       "resolved": "https://registry.npmjs.org/@pnpm/config.env-replace/-/config.env-replace-1.1.0.tgz",


@ -1,6 +1,6 @@
 {
   "name": "takt",
-  "version": "0.11.1",
+  "version": "0.12.0",
   "description": "TAKT: Task Agent Koordination Tool - AI Agent Piece Orchestration",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@ -20,8 +20,10 @@
     "test:e2e:provider": "npm run test:e2e:provider:claude && npm run test:e2e:provider:codex",
     "test:e2e:provider:claude": "TAKT_E2E_PROVIDER=claude vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
     "test:e2e:provider:codex": "TAKT_E2E_PROVIDER=codex vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
+    "test:e2e:provider:opencode": "TAKT_E2E_PROVIDER=opencode vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
     "test:e2e:claude": "npm run test:e2e:provider:claude",
     "test:e2e:codex": "npm run test:e2e:provider:codex",
+    "test:e2e:opencode": "npm run test:e2e:provider:opencode",
     "lint": "eslint src/",
     "prepublishOnly": "npm run lint && npm run build && npm run test"
   },
@ -59,6 +61,7 @@
   "dependencies": {
     "@anthropic-ai/claude-agent-sdk": "^0.2.37",
     "@openai/codex-sdk": "^0.98.0",
+    "@opencode-ai/sdk": "^1.1.53",
     "chalk": "^5.3.0",
     "commander": "^12.1.0",
     "update-notifier": "^7.3.1",


@ -22,7 +22,7 @@ describe('StreamDisplay', () => {
   describe('progress info display', () => {
     const progressInfo: ProgressInfo = {
       iteration: 3,
-      maxIterations: 10,
+      maxMovements: 10,
       movementIndex: 1,
       totalMovements: 4,
     };
@ -253,7 +253,7 @@
     it('should format progress as (iteration/max) step index/total', () => {
       const progressInfo: ProgressInfo = {
         iteration: 5,
-        maxIterations: 20,
+        maxMovements: 20,
         movementIndex: 2,
         totalMovements: 6,
       };
@ -267,7 +267,7 @@
     it('should convert 0-indexed movementIndex to 1-indexed display', () => {
       const progressInfo: ProgressInfo = {
         iteration: 1,
-        maxIterations: 10,
+        maxMovements: 10,
         movementIndex: 0, // First movement (0-indexed)
         totalMovements: 4,
       };
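Reconstructed from the fixtures and test names above, the `ProgressInfo` shape and the expected formatting look roughly like this — illustrative only; the real definitions live in the takt source:

```typescript
// Shape inferred from the test fixtures; maxMovements replaces the old
// maxIterations field in this release.
interface ProgressInfo {
  iteration: number;      // current iteration
  maxMovements: number;
  movementIndex: number;  // 0-indexed internally, displayed 1-indexed
  totalMovements: number;
}

// "(iteration/max) step index/total", per the test descriptions.
function formatProgress(p: ProgressInfo): string {
  return `(${p.iteration}/${p.maxMovements}) step ${p.movementIndex + 1}/${p.totalMovements}`;
}
```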


@@ -8,11 +8,6 @@ vi.mock('../features/interactive/index.js', () => ({
   interactiveMode: vi.fn(),
 }));
-vi.mock('../infra/config/global/globalConfig.js', () => ({
-  loadGlobalConfig: vi.fn(() => ({ provider: 'claude' })),
-  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
-}));
 vi.mock('../shared/prompt/index.js', () => ({
   promptInput: vi.fn(),
   confirm: vi.fn(),
@@ -23,6 +18,7 @@ vi.mock('../shared/ui/index.js', () => ({
   info: vi.fn(),
   blankLine: vi.fn(),
   error: vi.fn(),
+  withProgress: vi.fn(async (_start, _done, operation) => operation()),
 }));
 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
@@ -38,15 +34,6 @@ vi.mock('../features/tasks/execute/selectAndExecute.js', () => ({
   determinePiece: vi.fn(),
 }));
-vi.mock('../infra/config/loaders/pieceResolver.js', () => ({
-  getPieceDescription: vi.fn(() => ({
-    name: 'default',
-    description: '',
-    pieceStructure: '1. implement\n2. review',
-    movementPreviews: [],
-  })),
-}));
 vi.mock('../infra/github/issue.js', () => ({
   isIssueReference: vi.fn((s: string) => /^#\d+$/.test(s)),
   resolveIssueTask: vi.fn(),
@@ -65,15 +52,17 @@ vi.mock('../infra/github/issue.js', () => ({
 import { interactiveMode } from '../features/interactive/index.js';
 import { promptInput, confirm } from '../shared/prompt/index.js';
+import { info } from '../shared/ui/index.js';
 import { determinePiece } from '../features/tasks/execute/selectAndExecute.js';
 import { resolveIssueTask } from '../infra/github/issue.js';
 import { addTask } from '../features/tasks/index.js';
+const mockResolveIssueTask = vi.mocked(resolveIssueTask);
 const mockInteractiveMode = vi.mocked(interactiveMode);
 const mockPromptInput = vi.mocked(promptInput);
 const mockConfirm = vi.mocked(confirm);
+const mockInfo = vi.mocked(info);
 const mockDeterminePiece = vi.mocked(determinePiece);
-const mockResolveIssueTask = vi.mocked(resolveIssueTask);
 let testDir: string;
@@ -96,23 +85,42 @@ afterEach(() => {
 });
 describe('addTask', () => {
-  it('should create task entry from interactive result', async () => {
-    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: '# 認証機能追加\nJWT認証を実装する' });
+  function readOrderContent(dir: string, taskDir: unknown): string {
+    return fs.readFileSync(path.join(dir, String(taskDir), 'order.md'), 'utf-8');
+  }
+  it('should show usage and exit when task is missing', async () => {
     await addTask(testDir);
-    const tasks = loadTasks(testDir).tasks;
-    expect(tasks).toHaveLength(1);
-    expect(tasks[0]?.content).toContain('JWT認証を実装する');
-    expect(tasks[0]?.piece).toBe('default');
+    expect(mockInfo).toHaveBeenCalledWith('Usage: takt add <task>');
+    expect(mockDeterminePiece).not.toHaveBeenCalled();
+    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
+  });
+  it('should show usage and exit when task is blank', async () => {
+    await addTask(testDir, ' ');
+    expect(mockInfo).toHaveBeenCalledWith('Usage: takt add <task>');
+    expect(mockDeterminePiece).not.toHaveBeenCalled();
+    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
+  });
+  it('should save plain text task without interactive mode', async () => {
+    await addTask(testDir, ' JWT認証を実装する ');
+    expect(mockInteractiveMode).not.toHaveBeenCalled();
+    const task = loadTasks(testDir).tasks[0]!;
+    expect(task.content).toBeUndefined();
+    expect(task.task_dir).toBeTypeOf('string');
+    expect(readOrderContent(testDir, task.task_dir)).toContain('JWT認証を実装する');
+    expect(task.piece).toBe('default');
   });
   it('should include worktree settings when enabled', async () => {
-    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'Task content' });
     mockConfirm.mockResolvedValue(true);
     mockPromptInput.mockResolvedValueOnce('/custom/path').mockResolvedValueOnce('feat/branch');
-    await addTask(testDir);
+    await addTask(testDir, 'Task content');
     const task = loadTasks(testDir).tasks[0]!;
     expect(task.worktree).toBe('/custom/path');
@@ -121,20 +129,20 @@ describe('addTask', () => {
   it('should create task from issue reference without interactive mode', async () => {
     mockResolveIssueTask.mockReturnValue('Issue #99: Fix login timeout');
-    mockConfirm.mockResolvedValue(false);
     await addTask(testDir, '#99');
     expect(mockInteractiveMode).not.toHaveBeenCalled();
     const task = loadTasks(testDir).tasks[0]!;
-    expect(task.content).toContain('Fix login timeout');
+    expect(task.content).toBeUndefined();
+    expect(readOrderContent(testDir, task.task_dir)).toContain('Fix login timeout');
     expect(task.issue).toBe(99);
   });
   it('should not create task when piece selection is cancelled', async () => {
     mockDeterminePiece.mockResolvedValue(null);
-    await addTask(testDir);
+    await addTask(testDir, 'Task content');
     expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
   });

View File

@@ -32,7 +32,7 @@ vi.mock('../infra/config/paths.js', async (importOriginal) => {
 });
 // Import after mocking
-const { loadGlobalConfig, saveGlobalConfig, resolveAnthropicApiKey, resolveOpenaiApiKey, invalidateGlobalConfigCache } = await import('../infra/config/global/globalConfig.js');
+const { loadGlobalConfig, saveGlobalConfig, resolveAnthropicApiKey, resolveOpenaiApiKey, resolveOpencodeApiKey, invalidateGlobalConfigCache } = await import('../infra/config/global/globalConfig.js');
 describe('GlobalConfigSchema API key fields', () => {
   it('should accept config without API keys', () => {
@@ -280,3 +280,65 @@ describe('resolveOpenaiApiKey', () => {
     expect(key).toBeUndefined();
   });
 });
describe('resolveOpencodeApiKey', () => {
const originalEnv = process.env['TAKT_OPENCODE_API_KEY'];
beforeEach(() => {
invalidateGlobalConfigCache();
mkdirSync(taktDir, { recursive: true });
});
afterEach(() => {
if (originalEnv !== undefined) {
process.env['TAKT_OPENCODE_API_KEY'] = originalEnv;
} else {
delete process.env['TAKT_OPENCODE_API_KEY'];
}
rmSync(testDir, { recursive: true, force: true });
});
it('should return env var when set', () => {
process.env['TAKT_OPENCODE_API_KEY'] = 'sk-opencode-from-env';
const yaml = [
'language: en',
'default_piece: default',
'log_level: info',
'provider: claude',
'opencode_api_key: sk-opencode-from-yaml',
].join('\n');
writeFileSync(configPath, yaml, 'utf-8');
const key = resolveOpencodeApiKey();
expect(key).toBe('sk-opencode-from-env');
});
it('should fall back to config when env var is not set', () => {
delete process.env['TAKT_OPENCODE_API_KEY'];
const yaml = [
'language: en',
'default_piece: default',
'log_level: info',
'provider: claude',
'opencode_api_key: sk-opencode-from-yaml',
].join('\n');
writeFileSync(configPath, yaml, 'utf-8');
const key = resolveOpencodeApiKey();
expect(key).toBe('sk-opencode-from-yaml');
});
it('should return undefined when neither env var nor config is set', () => {
delete process.env['TAKT_OPENCODE_API_KEY'];
const yaml = [
'language: en',
'default_piece: default',
'log_level: info',
'provider: claude',
].join('\n');
writeFileSync(configPath, yaml, 'utf-8');
const key = resolveOpencodeApiKey();
expect(key).toBeUndefined();
});
});
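The precedence pinned down by these tests (environment variable first, YAML config second, `undefined` otherwise) can be sketched as a small helper. The names below (`resolveApiKeySketch`, the config shape) are illustrative assumptions, not takt's actual API:

```typescript
// Illustrative sketch of env-first API key resolution; not takt's real code.
interface GlobalConfigLike {
  opencode_api_key?: string;
}

function resolveApiKeySketch(
  envVar: string,
  config: GlobalConfigLike,
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  // The environment variable always wins over the YAML config value.
  const fromEnv = env[envVar];
  if (fromEnv !== undefined && fromEnv !== '') return fromEnv;
  // Fall back to the config file; undefined when neither is set.
  return config.opencode_api_key;
}
```

Keeping the env lookup first makes per-invocation overrides possible without editing the config file, which is what the first test above verifies.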

View File

@@ -0,0 +1,136 @@

/**
* Tests for CSV data source parsing and batch reading.
*/
import { describe, it, expect } from 'vitest';
import { parseCsv, CsvDataSource } from '../core/piece/arpeggio/csv-data-source.js';
import { writeFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { randomUUID } from 'node:crypto';
describe('parseCsv', () => {
it('should parse simple CSV content', () => {
const csv = 'name,age\nAlice,30\nBob,25';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'age'],
['Alice', '30'],
['Bob', '25'],
]);
});
it('should handle quoted fields', () => {
const csv = 'name,description\nAlice,"Hello, World"\nBob,"Line1"';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'description'],
['Alice', 'Hello, World'],
['Bob', 'Line1'],
]);
});
it('should handle escaped quotes (double quotes)', () => {
const csv = 'name,value\nAlice,"He said ""hello"""\nBob,simple';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'value'],
['Alice', 'He said "hello"'],
['Bob', 'simple'],
]);
});
it('should handle CRLF line endings', () => {
const csv = 'name,age\r\nAlice,30\r\nBob,25';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'age'],
['Alice', '30'],
['Bob', '25'],
]);
});
it('should handle bare CR line endings', () => {
const csv = 'name,age\rAlice,30\rBob,25';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'age'],
['Alice', '30'],
['Bob', '25'],
]);
});
it('should handle empty fields', () => {
const csv = 'a,b,c\n1,,3\n,,';
const result = parseCsv(csv);
expect(result).toEqual([
['a', 'b', 'c'],
['1', '', '3'],
['', '', ''],
]);
});
it('should handle newlines within quoted fields', () => {
const csv = 'name,bio\nAlice,"Line1\nLine2"\nBob,simple';
const result = parseCsv(csv);
expect(result).toEqual([
['name', 'bio'],
['Alice', 'Line1\nLine2'],
['Bob', 'simple'],
]);
});
});
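The behaviors exercised above (quoted fields, doubled-quote escapes, LF/CRLF/bare-CR line endings, embedded newlines) fit a single character-scanning state machine. This is a minimal sketch of such a parser, not the actual `parseCsv` in `csv-data-source.ts`:

```typescript
// Minimal CSV parser sketch covering the cases the tests above describe.
function parseCsvSketch(content: string): string[][] {
  const rows: string[][] = [];
  let row: string[] = [];
  let field = '';
  let inQuotes = false;
  for (let i = 0; i < content.length; i++) {
    const ch = content[i];
    if (inQuotes) {
      if (ch === '"') {
        if (content[i + 1] === '"') { field += '"'; i++; } // "" escapes a quote
        else inQuotes = false;                             // closing quote
      } else field += ch;                                  // newlines kept verbatim
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === ',') {
      row.push(field); field = '';
    } else if (ch === '\n' || ch === '\r') {
      if (ch === '\r' && content[i + 1] === '\n') i++;     // CRLF is one terminator
      row.push(field); field = '';
      rows.push(row); row = [];
    } else field += ch;
  }
  row.push(field);                                          // flush the final row
  rows.push(row);
  return rows;
}
```

Treating `\r\n` as a single terminator while also accepting bare `\r` and `\n` is what lets the same loop pass the CRLF and bare-CR cases above.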
describe('CsvDataSource', () => {
function createTempCsv(content: string): string {
const dir = join(tmpdir(), `takt-csv-test-${randomUUID()}`);
mkdirSync(dir, { recursive: true });
const filePath = join(dir, 'test.csv');
writeFileSync(filePath, content, 'utf-8');
return filePath;
}
it('should read batches with batch_size 1', async () => {
const filePath = createTempCsv('name,age\nAlice,30\nBob,25\nCharlie,35');
const source = new CsvDataSource(filePath);
const batches = await source.readBatches(1);
expect(batches).toHaveLength(3);
expect(batches[0]!.rows).toEqual([{ name: 'Alice', age: '30' }]);
expect(batches[0]!.batchIndex).toBe(0);
expect(batches[0]!.totalBatches).toBe(3);
expect(batches[1]!.rows).toEqual([{ name: 'Bob', age: '25' }]);
expect(batches[2]!.rows).toEqual([{ name: 'Charlie', age: '35' }]);
});
it('should read batches with batch_size 2', async () => {
const filePath = createTempCsv('name,age\nAlice,30\nBob,25\nCharlie,35');
const source = new CsvDataSource(filePath);
const batches = await source.readBatches(2);
expect(batches).toHaveLength(2);
expect(batches[0]!.rows).toEqual([
{ name: 'Alice', age: '30' },
{ name: 'Bob', age: '25' },
]);
expect(batches[0]!.totalBatches).toBe(2);
expect(batches[1]!.rows).toEqual([
{ name: 'Charlie', age: '35' },
]);
});
it('should throw when CSV has no data rows', async () => {
const filePath = createTempCsv('name,age');
const source = new CsvDataSource(filePath);
await expect(source.readBatches(1)).rejects.toThrow('CSV file has no data rows');
});
it('should handle missing columns by returning empty string', async () => {
const filePath = createTempCsv('a,b,c\n1,2\n3');
const source = new CsvDataSource(filePath);
const batches = await source.readBatches(1);
expect(batches[0]!.rows).toEqual([{ a: '1', b: '2', c: '' }]);
expect(batches[1]!.rows).toEqual([{ a: '3', b: '', c: '' }]);
});
});
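The header-mapping and batching contract these tests pin down can be sketched as follows; the names (`toBatchesSketch`, `BatchSketch`) are illustrative, not the real `CsvDataSource` internals:

```typescript
// Sketch: rows become objects keyed by header, short rows pad with '',
// and each batch carries its index plus the total batch count.
interface BatchSketch {
  rows: Record<string, string>[];
  batchIndex: number;
  totalBatches: number;
}

function toBatchesSketch(table: string[][], batchSize: number): BatchSketch[] {
  const [header, ...dataRows] = table;
  if (!header || dataRows.length === 0) throw new Error('CSV file has no data rows');
  // Map each data row to an object; missing trailing cells default to ''.
  const objects = dataRows.map((cells) =>
    Object.fromEntries(header.map((name, i) => [name, cells[i] ?? ''])),
  );
  const totalBatches = Math.ceil(objects.length / batchSize);
  return Array.from({ length: totalBatches }, (_, batchIndex) => ({
    rows: objects.slice(batchIndex * batchSize, (batchIndex + 1) * batchSize),
    batchIndex,
    totalBatches,
  }));
}
```

`Math.ceil` is what produces the uneven last batch in the `batch_size 2` case above (two rows, then one).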

View File

@@ -0,0 +1,50 @@

/**
* Tests for the arpeggio data source factory.
*
* Covers:
* - Built-in 'csv' source type returns CsvDataSource
* - Custom module: valid default export returns a data source
* - Custom module: non-function default export throws
* - Custom module: missing default export throws
*/
import { describe, it, expect } from 'vitest';
import { createDataSource } from '../core/piece/arpeggio/data-source-factory.js';
import { CsvDataSource } from '../core/piece/arpeggio/csv-data-source.js';
describe('createDataSource', () => {
it('should return a CsvDataSource for built-in "csv" type', async () => {
const source = await createDataSource('csv', '/path/to/data.csv');
expect(source).toBeInstanceOf(CsvDataSource);
});
it('should return a valid data source from a custom module with correct default export', async () => {
const tempModulePath = new URL(
'data:text/javascript,export default function(path) { return { readBatches: async () => [] }; }'
).href;
const source = await createDataSource(tempModulePath, '/some/path');
expect(source).toBeDefined();
expect(typeof source.readBatches).toBe('function');
});
it('should throw when custom module does not export a default function', async () => {
const tempModulePath = new URL(
'data:text/javascript,export default "not-a-function"'
).href;
await expect(createDataSource(tempModulePath, '/some/path')).rejects.toThrow(
/must export a default factory function/
);
});
it('should throw when custom module has no default export', async () => {
const tempModulePath = new URL(
'data:text/javascript,export const foo = 42'
).href;
await expect(createDataSource(tempModulePath, '/some/path')).rejects.toThrow(
/must export a default factory function/
);
});
});
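The factory contract these tests exercise — `'csv'` maps to the built-in source, anything else is treated as a module specifier whose default export must be a factory function — can be sketched like this (names are illustrative assumptions, and the built-in factory is injected to keep the sketch self-contained):

```typescript
// Hedged sketch of the data source factory; not the real data-source-factory.ts.
interface DataSourceLike {
  readBatches(batchSize: number): Promise<unknown[]>;
}

async function createDataSourceSketch(
  source: string,
  sourcePath: string,
  builtinCsvFactory: (p: string) => DataSourceLike,
): Promise<DataSourceLike> {
  if (source === 'csv') return builtinCsvFactory(sourcePath);
  // Anything else is dynamically imported as a custom module.
  const mod = await import(source);
  if (typeof mod.default !== 'function') {
    throw new Error(`Data source module "${source}" must export a default factory function`);
  }
  return mod.default(sourcePath) as DataSourceLike;
}
```

The tests above use `data:text/javascript,…` URLs as throwaway custom modules, which is a convenient way to exercise the dynamic-import path without touching the filesystem.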

View File

@@ -0,0 +1,108 @@

/**
* Tests for arpeggio merge processing.
*/
import { describe, it, expect } from 'vitest';
import { buildMergeFn } from '../core/piece/arpeggio/merge.js';
import type { ArpeggioMergeMovementConfig } from '../core/piece/arpeggio/types.js';
import type { BatchResult } from '../core/piece/arpeggio/types.js';
function makeResult(batchIndex: number, content: string, success = true): BatchResult {
return { batchIndex, content, success };
}
function makeFailedResult(batchIndex: number, error: string): BatchResult {
return { batchIndex, content: '', success: false, error };
}
describe('buildMergeFn', () => {
describe('concat strategy', () => {
it('should concatenate results with default separator (newline)', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
const mergeFn = await buildMergeFn(config);
const results = [
makeResult(0, 'Result A'),
makeResult(1, 'Result B'),
makeResult(2, 'Result C'),
];
expect(mergeFn(results)).toBe('Result A\nResult B\nResult C');
});
it('should concatenate results with custom separator', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'concat', separator: '\n---\n' };
const mergeFn = await buildMergeFn(config);
const results = [
makeResult(0, 'A'),
makeResult(1, 'B'),
];
expect(mergeFn(results)).toBe('A\n---\nB');
});
it('should sort results by batch index', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
const mergeFn = await buildMergeFn(config);
const results = [
makeResult(2, 'C'),
makeResult(0, 'A'),
makeResult(1, 'B'),
];
expect(mergeFn(results)).toBe('A\nB\nC');
});
it('should filter out failed results', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
const mergeFn = await buildMergeFn(config);
const results = [
makeResult(0, 'A'),
makeFailedResult(1, 'oops'),
makeResult(2, 'C'),
];
expect(mergeFn(results)).toBe('A\nC');
});
it('should return empty string when all results failed', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
const mergeFn = await buildMergeFn(config);
const results = [
makeFailedResult(0, 'error1'),
makeFailedResult(1, 'error2'),
];
expect(mergeFn(results)).toBe('');
});
});
describe('custom strategy with inline_js', () => {
it('should execute inline JS merge function', async () => {
const config: ArpeggioMergeMovementConfig = {
strategy: 'custom',
inlineJs: 'return results.filter(r => r.success).map(r => r.content.toUpperCase()).join(", ");',
};
const mergeFn = await buildMergeFn(config);
const results = [
makeResult(0, 'hello'),
makeResult(1, 'world'),
];
expect(mergeFn(results)).toBe('HELLO, WORLD');
});
it('should throw when inline JS returns non-string', async () => {
const config: ArpeggioMergeMovementConfig = {
strategy: 'custom',
inlineJs: 'return 42;',
};
const mergeFn = await buildMergeFn(config);
expect(() => mergeFn([makeResult(0, 'test')])).toThrow(
'Inline JS merge function must return a string, got number'
);
});
});
describe('custom strategy validation', () => {
it('should throw when custom strategy has neither inline_js nor file', async () => {
const config: ArpeggioMergeMovementConfig = { strategy: 'custom' };
await expect(buildMergeFn(config)).rejects.toThrow(
'Custom merge strategy requires either inline_js or file path'
);
});
});
});
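The concat strategy these tests describe — drop failed batches, restore source order by `batchIndex`, join with a configurable separator — reduces to a short pipeline. This is an illustrative sketch, not the actual merge module:

```typescript
// Sketch of the concat merge strategy the tests above pin down.
interface BatchResultSketch {
  batchIndex: number;
  content: string;
  success: boolean;
}

function concatMergeSketch(results: BatchResultSketch[], separator = '\n'): string {
  return results
    .filter((r) => r.success)                      // failed batches are excluded
    .sort((a, b) => a.batchIndex - b.batchIndex)   // restore source order
    .map((r) => r.content)
    .join(separator);                              // '' when everything failed
}
```

Filtering before sorting also makes the sort operate on a fresh array, so the caller's `results` is never mutated.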

View File

@@ -0,0 +1,332 @@

/**
* Tests for Arpeggio-related Zod schemas.
*
* Covers:
* - ArpeggioMergeRawSchema cross-validation (.refine())
* - ArpeggioConfigRawSchema required fields and defaults
* - PieceMovementRawSchema with arpeggio field
*/
import { describe, it, expect } from 'vitest';
import {
ArpeggioMergeRawSchema,
ArpeggioConfigRawSchema,
PieceMovementRawSchema,
} from '../core/models/index.js';
describe('ArpeggioMergeRawSchema', () => {
it('should accept concat strategy without inline_js or file', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'concat',
});
expect(result.success).toBe(true);
});
it('should accept concat strategy with separator', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'concat',
separator: '\n---\n',
});
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.separator).toBe('\n---\n');
}
});
it('should default strategy to concat when omitted', () => {
const result = ArpeggioMergeRawSchema.safeParse({});
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.strategy).toBe('concat');
}
});
it('should accept custom strategy with inline_js', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'custom',
inline_js: 'return results.map(r => r.content).join(",");',
});
expect(result.success).toBe(true);
});
it('should accept custom strategy with file', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'custom',
file: './merge.js',
});
expect(result.success).toBe(true);
});
it('should reject custom strategy without inline_js or file', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'custom',
});
expect(result.success).toBe(false);
});
it('should reject concat strategy with inline_js', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'concat',
inline_js: 'return "hello";',
});
expect(result.success).toBe(false);
});
it('should reject concat strategy with file', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'concat',
file: './merge.js',
});
expect(result.success).toBe(false);
});
it('should reject invalid strategy value', () => {
const result = ArpeggioMergeRawSchema.safeParse({
strategy: 'invalid',
});
expect(result.success).toBe(false);
});
});
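The cross-field rule these tests exercise — `inline_js`/`file` are only legal under strategy `custom`, which in turn requires one of them, with the strategy defaulting to `concat` — is implemented in the real schema via a Zod `.refine()`. A plain-function sketch of the same logic, with illustrative names and without the Zod dependency:

```typescript
// Sketch of the ArpeggioMergeRawSchema cross-validation as a plain predicate.
interface MergeRawSketch {
  strategy?: 'concat' | 'custom';
  separator?: string;
  inline_js?: string;
  file?: string;
}

function isValidMergeSketch(raw: MergeRawSketch): boolean {
  const strategy = raw.strategy ?? 'concat';     // strategy defaults to concat
  if (strategy === 'custom') {
    // custom must bring a merge body, either inline or from a file
    return raw.inline_js !== undefined || raw.file !== undefined;
  }
  // concat must not carry the custom-only fields
  return raw.inline_js === undefined && raw.file === undefined;
}
```

Encoding both directions of the rule (custom requires a body; concat forbids one) is what the reject cases above are checking.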
describe('ArpeggioConfigRawSchema', () => {
const validConfig = {
source: 'csv',
source_path: './data.csv',
template: './template.md',
};
it('should accept a valid minimal config', () => {
const result = ArpeggioConfigRawSchema.safeParse(validConfig);
expect(result.success).toBe(true);
});
it('should apply default values for optional fields', () => {
const result = ArpeggioConfigRawSchema.safeParse(validConfig);
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.batch_size).toBe(1);
expect(result.data.concurrency).toBe(1);
expect(result.data.max_retries).toBe(2);
expect(result.data.retry_delay_ms).toBe(1000);
}
});
it('should accept explicit values overriding defaults', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
batch_size: 5,
concurrency: 3,
max_retries: 4,
retry_delay_ms: 2000,
});
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.batch_size).toBe(5);
expect(result.data.concurrency).toBe(3);
expect(result.data.max_retries).toBe(4);
expect(result.data.retry_delay_ms).toBe(2000);
}
});
it('should accept config with merge field', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
merge: { strategy: 'concat', separator: '---' },
});
expect(result.success).toBe(true);
});
it('should accept config with output_path', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
output_path: './output.txt',
});
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.output_path).toBe('./output.txt');
}
});
it('should reject when source is empty', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
source: '',
});
expect(result.success).toBe(false);
});
it('should reject when source is missing', () => {
const { source: _, ...noSource } = validConfig;
const result = ArpeggioConfigRawSchema.safeParse(noSource);
expect(result.success).toBe(false);
});
it('should reject when source_path is empty', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
source_path: '',
});
expect(result.success).toBe(false);
});
it('should reject when source_path is missing', () => {
const { source_path: _, ...noSourcePath } = validConfig;
const result = ArpeggioConfigRawSchema.safeParse(noSourcePath);
expect(result.success).toBe(false);
});
it('should reject when template is empty', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
template: '',
});
expect(result.success).toBe(false);
});
it('should reject when template is missing', () => {
const { template: _, ...noTemplate } = validConfig;
const result = ArpeggioConfigRawSchema.safeParse(noTemplate);
expect(result.success).toBe(false);
});
it('should reject batch_size of 0', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
batch_size: 0,
});
expect(result.success).toBe(false);
});
it('should reject negative batch_size', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
batch_size: -1,
});
expect(result.success).toBe(false);
});
it('should reject concurrency of 0', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
concurrency: 0,
});
expect(result.success).toBe(false);
});
it('should reject negative concurrency', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
concurrency: -1,
});
expect(result.success).toBe(false);
});
it('should reject negative max_retries', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
max_retries: -1,
});
expect(result.success).toBe(false);
});
it('should accept max_retries of 0', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
max_retries: 0,
});
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.max_retries).toBe(0);
}
});
it('should reject non-integer batch_size', () => {
const result = ArpeggioConfigRawSchema.safeParse({
...validConfig,
batch_size: 1.5,
});
expect(result.success).toBe(false);
});
});
describe('PieceMovementRawSchema with arpeggio', () => {
it('should accept a movement with arpeggio config', () => {
const raw = {
name: 'batch-process',
arpeggio: {
source: 'csv',
source_path: './data.csv',
template: './prompt.md',
},
};
const result = PieceMovementRawSchema.safeParse(raw);
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.arpeggio).toBeDefined();
expect(result.data.arpeggio!.source).toBe('csv');
}
});
it('should accept a movement with arpeggio and rules', () => {
const raw = {
name: 'batch-process',
arpeggio: {
source: 'csv',
source_path: './data.csv',
template: './prompt.md',
batch_size: 2,
concurrency: 3,
},
rules: [
{ condition: 'All processed', next: 'COMPLETE' },
{ condition: 'Errors found', next: 'fix' },
],
};
const result = PieceMovementRawSchema.safeParse(raw);
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.arpeggio!.batch_size).toBe(2);
expect(result.data.arpeggio!.concurrency).toBe(3);
expect(result.data.rules).toHaveLength(2);
}
});
it('should accept a movement without arpeggio (normal movement)', () => {
const raw = {
name: 'normal-step',
persona: 'coder.md',
instruction_template: 'Do work',
};
const result = PieceMovementRawSchema.safeParse(raw);
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.arpeggio).toBeUndefined();
}
});
it('should accept a movement with arpeggio including custom merge', () => {
const raw = {
name: 'custom-merge-step',
arpeggio: {
source: 'csv',
source_path: './data.csv',
template: './prompt.md',
merge: {
strategy: 'custom',
inline_js: 'return results.map(r => r.content).join(", ");',
},
output_path: './output.txt',
},
};
const result = PieceMovementRawSchema.safeParse(raw);
expect(result.success).toBe(true);
if (result.success) {
expect(result.data.arpeggio!.merge).toBeDefined();
expect(result.data.arpeggio!.output_path).toBe('./output.txt');
}
});
});

View File

@@ -0,0 +1,83 @@

/**
* Tests for arpeggio template expansion.
*/
import { describe, it, expect } from 'vitest';
import { expandTemplate } from '../core/piece/arpeggio/template.js';
import type { DataBatch } from '../core/piece/arpeggio/types.js';
function makeBatch(rows: Record<string, string>[], batchIndex = 0, totalBatches = 1): DataBatch {
return { rows, batchIndex, totalBatches };
}
describe('expandTemplate', () => {
it('should expand {line:1} with formatted row data', () => {
const batch = makeBatch([{ name: 'Alice', age: '30' }]);
const result = expandTemplate('Process this: {line:1}', batch);
expect(result).toBe('Process this: name: Alice\nage: 30');
});
it('should expand {line:1} and {line:2} for multi-row batches', () => {
const batch = makeBatch([
{ name: 'Alice', age: '30' },
{ name: 'Bob', age: '25' },
]);
const result = expandTemplate('Row 1: {line:1}\nRow 2: {line:2}', batch);
expect(result).toBe('Row 1: name: Alice\nage: 30\nRow 2: name: Bob\nage: 25');
});
it('should expand {col:N:name} with specific column values', () => {
const batch = makeBatch([{ name: 'Alice', age: '30', city: 'Tokyo' }]);
const result = expandTemplate('Name: {col:1:name}, City: {col:1:city}', batch);
expect(result).toBe('Name: Alice, City: Tokyo');
});
it('should expand {batch_index} and {total_batches}', () => {
const batch = makeBatch([{ name: 'Alice' }], 2, 5);
const result = expandTemplate('Batch {batch_index} of {total_batches}', batch);
expect(result).toBe('Batch 2 of 5');
});
it('should expand all placeholder types in a single template', () => {
const batch = makeBatch([
{ name: 'Alice', role: 'dev' },
{ name: 'Bob', role: 'pm' },
], 0, 3);
const template = 'Batch {batch_index}/{total_batches}\nFirst: {col:1:name}\nSecond: {line:2}';
const result = expandTemplate(template, batch);
expect(result).toBe('Batch 0/3\nFirst: Alice\nSecond: name: Bob\nrole: pm');
});
it('should throw when {line:N} references out-of-range row', () => {
const batch = makeBatch([{ name: 'Alice' }]);
expect(() => expandTemplate('{line:2}', batch)).toThrow(
'Template placeholder {line:2} references row 2 but batch has 1 rows'
);
});
it('should throw when {col:N:name} references out-of-range row', () => {
const batch = makeBatch([{ name: 'Alice' }]);
expect(() => expandTemplate('{col:2:name}', batch)).toThrow(
'Template placeholder {col:2:name} references row 2 but batch has 1 rows'
);
});
it('should throw when {col:N:name} references unknown column', () => {
const batch = makeBatch([{ name: 'Alice' }]);
expect(() => expandTemplate('{col:1:missing}', batch)).toThrow(
'Template placeholder {col:1:missing} references unknown column "missing"'
);
});
it('should handle templates with no placeholders', () => {
const batch = makeBatch([{ name: 'Alice' }]);
const result = expandTemplate('No placeholders here', batch);
expect(result).toBe('No placeholders here');
});
it('should handle multiple occurrences of the same placeholder', () => {
const batch = makeBatch([{ name: 'Alice' }], 1, 3);
const result = expandTemplate('{batch_index} and {batch_index}', batch);
expect(result).toBe('1 and 1');
});
});

View File

@@ -11,6 +11,11 @@ import { describe, it, expect, vi, beforeEach } from 'vitest';
 vi.mock('../shared/ui/index.js', () => ({
   info: vi.fn(),
   error: vi.fn(),
+  withProgress: vi.fn(async (_start, _done, operation) => operation()),
+}));
+vi.mock('../shared/prompt/index.js', () => ({
+  confirm: vi.fn(() => true),
 }));
 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
@@ -46,6 +51,7 @@ vi.mock('../features/pipeline/index.js', () => ({
 vi.mock('../features/interactive/index.js', () => ({
   interactiveMode: vi.fn(),
   selectInteractiveMode: vi.fn(() => 'assistant'),
+  selectRecentSession: vi.fn(() => null),
   passthroughMode: vi.fn(),
   quietMode: vi.fn(),
   personaMode: vi.fn(),
@@ -83,8 +89,10 @@ vi.mock('../app/cli/helpers.js', () => ({
 }));
 import { checkGhCli, fetchIssue, formatIssueAsTask, parseIssueNumbers } from '../infra/github/issue.js';
-import { selectAndExecuteTask, determinePiece } from '../features/tasks/index.js';
-import { interactiveMode } from '../features/interactive/index.js';
+import { selectAndExecuteTask, determinePiece, createIssueFromTask, saveTaskFromInteractive } from '../features/tasks/index.js';
+import { interactiveMode, selectRecentSession } from '../features/interactive/index.js';
+import { loadGlobalConfig } from '../infra/config/index.js';
+import { confirm } from '../shared/prompt/index.js';
 import { isDirectTask } from '../app/cli/helpers.js';
 import { executeDefaultAction } from '../app/cli/routing.js';
 import type { GitHubIssue } from '../infra/github/types.js';
@@ -95,7 +103,12 @@ const mockFormatIssueAsTask = vi.mocked(formatIssueAsTask);
 const mockParseIssueNumbers = vi.mocked(parseIssueNumbers);
 const mockSelectAndExecuteTask = vi.mocked(selectAndExecuteTask);
 const mockDeterminePiece = vi.mocked(determinePiece);
+const mockCreateIssueFromTask = vi.mocked(createIssueFromTask);
+const mockSaveTaskFromInteractive = vi.mocked(saveTaskFromInteractive);
 const mockInteractiveMode = vi.mocked(interactiveMode);
+const mockSelectRecentSession = vi.mocked(selectRecentSession);
+const mockLoadGlobalConfig = vi.mocked(loadGlobalConfig);
+const mockConfirm = vi.mocked(confirm);
 const mockIsDirectTask = vi.mocked(isDirectTask);
 function createMockIssue(number: number): GitHubIssue {
@@ -117,6 +130,7 @@ beforeEach(() => {
   // Default setup
   mockDeterminePiece.mockResolvedValue('default');
   mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'summarized task' });
+  mockConfirm.mockResolvedValue(true);
   mockIsDirectTask.mockReturnValue(false);
   mockParseIssueNumbers.mockReturnValue([]);
 });
@@ -142,6 +156,7 @@ describe('Issue resolution in routing', () => {
     '/test/cwd',
     '## GitHub Issue #131: Issue #131',
     expect.anything(),
+    undefined,
   );
   // Then: selectAndExecuteTask should receive issues in options
@@ -194,6 +209,7 @@ describe('Issue resolution in routing', () => {
     '/test/cwd',
     '## GitHub Issue #131: Issue #131',
     expect.anything(),
+    undefined,
   );
   // Then: selectAndExecuteTask should receive issues
@@ -218,6 +234,7 @@ describe('Issue resolution in routing', () => {
     '/test/cwd',
     'refactor the code',
     expect.anything(),
+    undefined,
   );
   // Then: no issue fetching should occur
@@ -237,6 +254,7 @@ describe('Issue resolution in routing', () => {
     '/test/cwd',
     undefined,
     expect.anything(),
+    undefined,
   );
   // Then: no issue fetching should occur
@@ -261,4 +279,112 @@ describe('Issue resolution in routing', () => {
     expect(mockSelectAndExecuteTask).not.toHaveBeenCalled();
   });
 });
describe('create_issue action', () => {
  it('should create issue first, then delegate final confirmation to saveTaskFromInteractive', async () => {
    // Given
    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
    mockCreateIssueFromTask.mockReturnValue(226);
    // When
    await executeDefaultAction();
    // Then: issue is created first
    expect(mockCreateIssueFromTask).toHaveBeenCalledWith('New feature request');
    // Then: saveTaskFromInteractive receives final confirmation message
    expect(mockSaveTaskFromInteractive).toHaveBeenCalledWith(
      '/test/cwd',
      'New feature request',
      'default',
      { issue: 226, confirmAtEndMessage: 'Add this issue to tasks?' },
    );
  });
  it('should skip confirmation and task save when issue creation fails', async () => {
    // Given
    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
    mockCreateIssueFromTask.mockReturnValue(undefined);
    // When
    await executeDefaultAction();
    // Then
    expect(mockCreateIssueFromTask).toHaveBeenCalledWith('New feature request');
    expect(mockSaveTaskFromInteractive).not.toHaveBeenCalled();
  });
  it('should not call selectAndExecuteTask when create_issue action is chosen', async () => {
    // Given
    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
    // When
    await executeDefaultAction();
    // Then: selectAndExecuteTask should NOT be called
    expect(mockSelectAndExecuteTask).not.toHaveBeenCalled();
  });
});
describe('session selection with provider=claude', () => {
  it('should pass selected session ID to interactiveMode when provider is claude', async () => {
    // Given
    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'claude' });
    mockConfirm.mockResolvedValue(true);
    mockSelectRecentSession.mockResolvedValue('session-xyz');
    // When
    await executeDefaultAction();
    // Then: selectRecentSession should be called
    expect(mockSelectRecentSession).toHaveBeenCalledWith('/test/cwd', 'en');
    // Then: interactiveMode should receive the session ID as 4th argument
    expect(mockInteractiveMode).toHaveBeenCalledWith(
      '/test/cwd',
      undefined,
      expect.anything(),
      'session-xyz',
    );
    expect(mockConfirm).toHaveBeenCalledWith('Choose a previous session?', false);
  });
  it('should not call selectRecentSession when user selects no in confirmation', async () => {
    // Given
    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'claude' });
    mockConfirm.mockResolvedValue(false);
    // When
    await executeDefaultAction();
    // Then
    expect(mockConfirm).toHaveBeenCalledWith('Choose a previous session?', false);
    expect(mockSelectRecentSession).not.toHaveBeenCalled();
    expect(mockInteractiveMode).toHaveBeenCalledWith(
      '/test/cwd',
      undefined,
      expect.anything(),
      undefined,
    );
  });
  it('should not call selectRecentSession when provider is not claude', async () => {
    // Given
    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'openai' });
    // When
    await executeDefaultAction();
    // Then: selectRecentSession should NOT be called
    expect(mockSelectRecentSession).not.toHaveBeenCalled();
    // Then: interactiveMode should be called with undefined session ID
    expect(mockInteractiveMode).toHaveBeenCalledWith(
      '/test/cwd',
      undefined,
      expect.anything(),
      undefined,
    );
  });
});
});
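The session-selection tests above pin down a gating flow: resume is offered only for the `claude` provider, behind a confirmation prompt, and the selected session ID is threaded through to `interactiveMode`. A hedged sketch of that flow (the helper names match the mocked modules, but this wiring is an assumption about `executeDefaultAction`, not its actual source):

```typescript
// Assumed wiring, for illustration only: session resume is offered solely for
// the claude provider, and only after the user opts in at the prompt.
async function resolveSessionId(
  provider: string,
  cwd: string,
  confirm: (message: string, defaultValue: boolean) => Promise<boolean>,
  selectRecentSession: (cwd: string, language: string) => Promise<string | undefined>,
): Promise<string | undefined> {
  // Non-claude providers never get a session prompt.
  if (provider !== 'claude') return undefined;
  // Declining the confirmation skips session selection entirely.
  if (!(await confirm('Choose a previous session?', false))) return undefined;
  return selectRecentSession(cwd, 'en');
}
```

Under this sketch, the third test's `provider: 'openai'` short-circuits before `selectRecentSession`, which is exactly what the `not.toHaveBeenCalled()` assertion checks.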

View File

@@ -28,14 +28,23 @@ vi.mock('../infra/task/summarize.js', () => ({
  summarizeTaskName: vi.fn(),
}));
vi.mock('../shared/ui/index.js', () => {
  const info = vi.fn();
  return {
    info,
    error: vi.fn(),
    success: vi.fn(),
    header: vi.fn(),
    status: vi.fn(),
    setLogLevel: vi.fn(),
    withProgress: vi.fn(async (start, done, operation) => {
      info(start);
      const result = await operation();
      info(typeof done === 'function' ? done(result) : done);
      return result;
    }),
  };
});
vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
@@ -199,6 +208,7 @@ describe('confirmAndCreateWorktree', () => {
    // Then
    expect(mockInfo).toHaveBeenCalledWith('Generating branch name...');
    expect(mockInfo).toHaveBeenCalledWith('Branch name generated: test-task');
  });
  it('should skip prompt when override is false', async () => {
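The rewritten `vi.mock` factory above emulates a `withProgress` helper. A minimal sketch of the contract the mock assumes — log a start message, run the operation, then log a done message that may be computed from the result — could look like this (the real `shared/ui` implementation is not shown in this diff and may differ):

```typescript
// Sketch of the withProgress contract the test mock emulates. This is an
// assumption derived from the mock's behavior, not the actual implementation.
async function withProgress<T>(
  start: string,
  done: string | ((result: T) => string),
  operation: () => Promise<T>,
): Promise<T> {
  console.log(start);                 // start message before the work begins
  const result = await operation();   // run the wrapped async operation
  // done may be a fixed string or a function of the operation's result
  console.log(typeof done === 'function' ? done(result) : done);
  return result;
}

// Usage mirroring the branch-name expectations asserted in the test:
withProgress(
  'Generating branch name...',
  (name) => `Branch name generated: ${name}`,
  async () => 'test-task',
);
```

This shape explains why the test can assert both `'Generating branch name...'` and `'Branch name generated: test-task'` through the single mocked `info` function.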

View File

@@ -188,7 +188,7 @@ describe('loadAllPieces', () => {
  const samplePiece = `
name: test-piece
description: Test piece
max_movements: 10
movements:
  - name: step1
    persona: coder

View File

@@ -114,6 +114,42 @@ describe('createIssueFromTask', () => {
    expect(mockSuccess).not.toHaveBeenCalled();
  });
  describe('return value', () => {
    it('should return issue number when creation succeeds', () => {
      // Given
      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/42' });
      // When
      const result = createIssueFromTask('Test task');
      // Then
      expect(result).toBe(42);
    });
    it('should return undefined when creation fails', () => {
      // Given
      mockCreateIssue.mockReturnValue({ success: false, error: 'auth failed' });
      // When
      const result = createIssueFromTask('Test task');
      // Then
      expect(result).toBeUndefined();
    });
    it('should return undefined and display error when URL has non-numeric suffix', () => {
      // Given
      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/abc' });
      // When
      const result = createIssueFromTask('Test task');
      // Then
      expect(result).toBeUndefined();
      expect(mockError).toHaveBeenCalledWith('Failed to extract issue number from URL');
    });
  });
  it('should use first line as title and full text as body for multi-line task', () => {
    // Given: multi-line task
    const task = 'First line title\nSecond line details\nThird line more info';
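The return-value tests above imply that `createIssueFromTask` parses the issue number out of the URL returned by issue creation, yielding `undefined` when the suffix is not numeric. A hypothetical sketch of that parsing step (function name and shape are assumptions for illustration, not the real helper):

```typescript
// Hypothetical helper illustrating the behavior the tests pin down: extract
// the trailing issue number from a GitHub issue URL, or return undefined
// when the suffix is not numeric.
function extractIssueNumber(url: string): number | undefined {
  const match = /\/issues\/(\d+)$/.exec(url);
  return match ? Number(match[1]) : undefined;
}
```

Anchoring the regex at the end of the string is what makes a suffix like `issues/abc` fail cleanly rather than match a digit elsewhere in the URL.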

View File

@@ -63,7 +63,7 @@ describe('debug logging', () => {
    }
  });
  it('should write debug log to project .takt/runs/*/logs/ directory', () => {
    const projectDir = join(tmpdir(), 'takt-test-debug-project-' + Date.now());
    mkdirSync(projectDir, { recursive: true });
@@ -71,7 +71,9 @@ describe('debug logging', () => {
      initDebugLogger({ enabled: true }, projectDir);
      const logFile = getDebugLogFile();
      expect(logFile).not.toBeNull();
      expect(logFile!).toContain(join(projectDir, '.takt', 'runs'));
      expect(logFile!).toContain(`${join(projectDir, '.takt', 'runs')}/`);
      expect(logFile!).toContain('/logs/');
      expect(logFile!).toMatch(/debug-.*\.log$/);
      expect(existsSync(logFile!)).toBe(true);
    } finally {
@@ -86,7 +88,8 @@ describe('debug logging', () => {
    try {
      initDebugLogger({ enabled: true }, projectDir);
      const promptsLogFile = resolvePromptsLogFilePath();
      expect(promptsLogFile).toContain(join(projectDir, '.takt', 'runs'));
      expect(promptsLogFile).toContain('/logs/');
      expect(promptsLogFile).toMatch(/debug-.*-prompts\.jsonl$/);
      expect(existsSync(promptsLogFile)).toBe(true);
    } finally {
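The updated assertions move debug logs from a flat `.takt/logs/` directory to a per-run layout under `.takt/runs/`. A sketch of the path structure the tests imply (the run-directory naming is an assumption; the diff only shows that the path contains `.takt/runs/` and `/logs/` and that filenames match `debug-*.log`):

```typescript
import { join } from 'node:path';

// Illustrative only: builds a path of the shape the tests assert against,
// .takt/runs/<run-id>/logs/debug-<stamp>.log. The actual run-id scheme in
// takt is not visible in this diff.
function debugLogPath(projectDir: string, runId: string, stamp: string): string {
  return join(projectDir, '.takt', 'runs', runId, 'logs', `debug-${stamp}.log`);
}
```

Keeping logs under a per-run directory means each run's debug and prompt logs stay grouped together instead of interleaving in one shared folder.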

View File

@@ -1,6 +1,11 @@
import { describe, it, expect, afterEach } from 'vitest';
import { readFileSync, writeFileSync } from 'node:fs';
import { parse as parseYaml } from 'yaml';
import { injectProviderArgs } from '../../e2e/helpers/takt-runner.js';
import {
  createIsolatedEnv,
  updateIsolatedConfig,
} from '../../e2e/helpers/isolated-env.js';
describe('injectProviderArgs', () => {
  it('should prepend --provider when provider is specified', () => {
@@ -70,4 +75,112 @@ describe('createIsolatedEnv', () => {
    expect(isolated.env.GIT_CONFIG_GLOBAL).toBeDefined();
    expect(isolated.env.GIT_CONFIG_GLOBAL).toContain('takt-e2e-');
  });
  it('should create config.yaml from E2E fixture with notification_sound timing controls', () => {
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
    const config = parseYaml(configRaw) as Record<string, unknown>;
    expect(config.language).toBe('en');
    expect(config.log_level).toBe('info');
    expect(config.default_piece).toBe('default');
    expect(config.notification_sound).toBe(true);
    expect(config.notification_sound_events).toEqual({
      iteration_limit: false,
      piece_complete: false,
      piece_abort: false,
      run_complete: true,
      run_abort: true,
    });
  });
  it('should override provider in config.yaml when TAKT_E2E_PROVIDER is set', () => {
    process.env = { ...originalEnv, TAKT_E2E_PROVIDER: 'mock' };
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
    const config = parseYaml(configRaw) as Record<string, unknown>;
    expect(config.provider).toBe('mock');
  });
  it('should preserve base settings when updateIsolatedConfig applies patch', () => {
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    updateIsolatedConfig(isolated.taktDir, {
      provider: 'mock',
      concurrency: 2,
    });
    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
    const config = parseYaml(configRaw) as Record<string, unknown>;
    expect(config.provider).toBe('mock');
    expect(config.concurrency).toBe(2);
    expect(config.notification_sound).toBe(true);
    expect(config.notification_sound_events).toEqual({
      iteration_limit: false,
      piece_complete: false,
      piece_abort: false,
      run_complete: true,
      run_abort: true,
    });
    expect(config.language).toBe('en');
  });
  it('should deep-merge notification_sound_events patch and preserve unspecified keys', () => {
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    updateIsolatedConfig(isolated.taktDir, {
      notification_sound_events: {
        run_complete: false,
      },
    });
    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
    const config = parseYaml(configRaw) as Record<string, unknown>;
    expect(config.notification_sound_events).toEqual({
      iteration_limit: false,
      piece_complete: false,
      piece_abort: false,
      run_complete: false,
      run_abort: true,
    });
  });
  it('should throw when patch.notification_sound_events is not an object', () => {
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    expect(() => {
      updateIsolatedConfig(isolated.taktDir, {
        notification_sound_events: true,
      });
    }).toThrow('Invalid notification_sound_events in patch: expected object');
  });
  it('should throw when current config notification_sound_events is invalid', () => {
    const isolated = createIsolatedEnv();
    cleanups.push(isolated.cleanup);
    writeFileSync(
      `${isolated.taktDir}/config.yaml`,
      [
        'language: en',
        'log_level: info',
        'default_piece: default',
        'notification_sound: true',
        'notification_sound_events: true',
      ].join('\n'),
    );
    expect(() => {
      updateIsolatedConfig(isolated.taktDir, { provider: 'mock' });
    }).toThrow('Invalid notification_sound_events in current config: expected object');
  });
});
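The `updateIsolatedConfig` tests above describe a one-level deep merge: top-level patch keys override the current config, `notification_sound_events` is merged key-by-key so unspecified flags survive, and a non-object value on either side is rejected. A minimal sketch under those assumptions (this mirrors the tested behavior, not the helper's actual implementation):

```typescript
type Config = Record<string, unknown>;

// Assumed merge semantics, reconstructed from the tests: validate the current
// config's notification_sound_events up front, shallow-merge the patch, then
// deep-merge notification_sound_events one level when the patch provides it.
function mergeConfigPatch(current: Config, patch: Config): Config {
  const cur = current.notification_sound_events;
  if (cur !== undefined && (typeof cur !== 'object' || cur === null)) {
    throw new Error('Invalid notification_sound_events in current config: expected object');
  }
  const merged: Config = { ...current, ...patch };
  if ('notification_sound_events' in patch) {
    const pat = patch.notification_sound_events;
    if (typeof pat !== 'object' || pat === null) {
      throw new Error('Invalid notification_sound_events in patch: expected object');
    }
    // Key-by-key merge preserves event flags the patch does not mention.
    merged.notification_sound_events = { ...((cur as Config) ?? {}), ...(pat as Config) };
  }
  return merged;
}
```

Note that the current-config validation runs even when the patch does not touch `notification_sound_events`, matching the test that patches only `provider` against a corrupted config.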

Some files were not shown because too many files have changed in this diff.