Release v0.12.0 (#241)
* takt: github-issue-193-takt-add-issue (#199)
* Temporary addition
* github-issue-200-arpeggio (#203)
* fix: automatically sync the next dist-tag on stable releases
* takt: github-issue-200-arpeggio
* github-issue-201-completetask-completed-tasks-yaml (#202)
* takt: github-issue-201-completetask-completed-tasks-yaml
* takt: github-issue-204-takt-tasks (#205)
* feat: add a frontend-focused piece and introduce parallel arch-review
* chore: tidy the ja/en ordering and wording of piece categories
* takt: github-issue-207-previous-response-source-path (#210)
* fix: route callAiJudge through the provider system (Codex support). callAiJudge was hardcoded under infra/claude/, so judge evaluation did not work with the Codex provider. Moved it to agents/ai-judge.ts so the provider is resolved correctly via runAgent.
* Release v0.11.1
* takt/#209/update review history logs (#213)
* takt: github-issue-209
* takt: github-issue-198-e2e-config-yaml (#208)
* takt: github-issue-194-takt-add (#206)
* Handle the slug agent running away
* Curb runaway behavior
* chore: add completion logs for branch and issue generation
* Make progress output easier to follow
* fix
* test: add withProgress mock in selectAndExecute autoPr test
* takt: github-issue-212-max-iteration-max-movement-ostinato (#217)
* takt: github-issue-180-ai (#219)
* takt: github-issue-163-report-phase-blocked (#218)
* Confirm whether to queue a task when creating an issue
* takt: opencode (#222)
* takt: github-issue-192-e2e-test (#221)
* takt: issue (#220)
* Avoid port conflicts
* OpenCode support
* Restore pass_previous_response
* takt: task-1770764964345 (#225)
* Fix prompts being echoed with opencode
* Fix opencode hanging
* Copy the task instruction file into the worktree
* Suppress opencode questions
* Print the provider and model name
* fix: lint errors in merge/resolveTask/confirm
* fix: opencode permission and tool wiring for edit execution
* Fix incorrect opencode completion detection
* add e2e for opencode
* add test
* takt: github-issue-236-feat-claude-codex-opencode (#239)
* takt: slackweb (#234)
* takt: github-issue-238-fix-opencode (#240)
* Release v0.12.0
provider event log default false
commit 86e80f33aa

CHANGELOG.md (39 lines changed)
@@ -4,6 +4,45 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).

## [0.12.0] - 2026-02-11

### Added

- **OpenCode provider**: Native support for OpenCode as a third provider: SDK integration via `@opencode-ai/sdk/v2`, permission mapping (readonly/edit/full → reject/once/always), SSE stream handling, a retry mechanism (up to 3 attempts), and hang detection via a 10-minute timeout (#236, #238)
- **Arpeggio movement**: New movement type for data-driven batch processing: batch splitting from CSV data sources, template expansion (`{line:N}`, `{col:N:name}`, `{batch_index}`), concurrent LLM calls (semaphore-controlled), and concat/custom merge strategies (#200)
- **`frontend` builtin piece**: New piece specialized for frontend development: React/Next.js knowledge injection, coding/testing policy application, and parallel architecture review
- **Slack webhook notifications**: Automatic Slack notification when a piece run completes: configured via the `TAKT_NOTIFY_WEBHOOK` environment variable, 10-second timeout, and failures never block other processing (#234)
- **Session selection UI**: When interactive mode starts, a resumable session can be picked from past Claude Code sessions: lists the latest 10 sessions with initial-input and final-response previews (#180)
- **Provider event logs**: Claude/Codex/OpenCode runtime events are written to file as NDJSON: recorded to `.takt/logs/{sessionId}-provider-events.jsonl`, with automatic compression of long text (#236)
- **Provider/model display**: The provider and model in use are printed to the console for each movement run

### Changed

- **`takt add` overhaul**: Auto-add to the task queue on issue selection, removal of interactive mode, and a prompt to queue a task on issue creation (#193, #194)
- **`max_iteration` → `max_movement` unification**: Unified the iteration-limit terminology and added `ostinato` for unbounded runs (#212)
- **Improved `previous_response` injection**: Implemented length control and always-on Source Path injection (#207)
- **Task management improvements**: Redefined `.takt/tasks/` as the home for long-form task specifications; `completeTask()` now removes completed records from `tasks.yaml` (#201, #204)
- **Review output improvements**: Review output now reflects the latest state, with past reports split out into history logs (#209)
- **Builtin piece simplification**: Further tidied the top-level declarations of all builtin pieces

### Fixed

- **Report Phase blocked handling**: Retry in a fresh session when the Report Phase (Phase 2) is blocked (#163)
- **OpenCode hang and completion detection**: Suppressed prompt echo and questions, fixed hangs, and fixed incorrect completion detection (#238)
- **OpenCode permission and tool configuration**: Fixed permission and tool wiring for edit execution
- **Task instructions copied to worktree**: Task instruction files are now copied correctly on worktree runs
- Fixed lint errors (merge/resolveTask/confirm)

### Internal

- Added comprehensive tests for the OpenCode provider (client-cleanup, config, provider, stream-handler, types)
- Added comprehensive tests for Arpeggio (csv, data-source-factory, merge, schema, template, engine-arpeggio)
- Major E2E test expansion: cli-catalog, cli-clear, cli-config, cli-export-cc, cli-help, cli-prompt, cli-reset-categories, cli-switch, error-handling, piece-error-handling, provider-error, quiet-mode, run-multiple-tasks, task-content-file (#192, #198)
- Added `providerEventLogger.ts`, `providerModel.ts`, `slackWebhook.ts`, `session-reader.ts`, `sessionSelector.ts`, `provider-resolution.ts`, `run-paths.ts`
- Added `ArpeggioRunner.ts` (data-driven batch processing engine)
- Routed AI Judge through the provider system (Codex/OpenCode support)
- Tests added/expanded: report-phase-blocked, phase-runner-report-history, judgment-fallback, pieceExecution-session-loading, globalConfig-defaults, session-reader, sessionSelector, slackWebhook, providerEventLogger, provider-model, interactive, run-paths, engine-test-helpers

## [0.11.1] - 2026-02-10

### Fixed
12
CLAUDE.md
12
CLAUDE.md
@@ -218,7 +218,7 @@ Builtin resources are embedded in the npm package (`builtins/`). User files in `

```yaml
name: piece-name
description: Optional description
-max_iterations: 10
+max_movements: 10
initial_step: plan # First step to execute

steps:

@@ -291,7 +291,7 @@ Key points about parallel steps:

|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide iteration count |
-| `{max_iterations}` | Maximum iterations allowed |
+| `{max_movements}` | Maximum movements allowed |
| `{step_iteration}` | Per-step iteration count |
| `{previous_response}` | Previous step output (auto-injected if not in template) |
| `{user_inputs}` | Accumulated user inputs (auto-injected if not in template) |

@@ -406,7 +406,7 @@ Key constraints:

- **Ephemeral lifecycle**: Clone is created → task runs → auto-commit + push → clone is deleted. Branches are the single source of truth.
- **Session isolation**: Claude Code sessions are stored per-cwd in `~/.claude/projects/{encoded-path}/`. Sessions from the main project cannot be resumed in a clone. The engine skips session resume when `cwd !== projectCwd`.
- **No node_modules**: Clones only contain tracked files. `node_modules/` is absent.
-- **Dual cwd**: `cwd` = clone path (where agents run), `projectCwd` = project root. Reports write to `cwd/.takt/reports/` (clone) to prevent agents from discovering the main repository. Logs and session data write to `projectCwd`.
+- **Dual cwd**: `cwd` = clone path (where agents run), `projectCwd` = project root. Reports write to `cwd/.takt/runs/{slug}/reports/` (clone) to prevent agents from discovering the main repository. Logs and session data write to `projectCwd`.
- **List**: Use `takt list` to list branches. Instruct action creates a temporary clone for the branch, executes, pushes, then removes the clone.

## Error Propagation

@@ -455,10 +455,10 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d

- If persona file doesn't exist, the persona string is used as inline system prompt

**Report directory structure:**

-- Report dirs are created at `.takt/reports/{timestamp}-{slug}/`
+- Report dirs are created at `.takt/runs/{timestamp}-{slug}/reports/`
- Report files specified in `step.report` are written relative to report dir
- Report dir path is available as `{report_dir}` variable in instruction templates
-- When `cwd !== projectCwd` (worktree execution), reports write to `cwd/.takt/reports/` (clone dir) to prevent agents from discovering the main repository path
+- When `cwd !== projectCwd` (worktree execution), reports write to `cwd/.takt/runs/{slug}/reports/` (clone dir) to prevent agents from discovering the main repository path

**Session continuity across phases:**

- Agent sessions persist across Phase 1 → Phase 2 → Phase 3 for context continuity

@@ -470,7 +470,7 @@ Debug logs are written to `.takt/logs/debug.log` (ndjson format). Log levels: `d

- `git clone --shared` creates independent `.git` directory (not `git worktree`)
- Clone cwd ≠ project cwd: agents work in clone, reports write to clone, logs write to project
- Session resume is skipped when `cwd !== projectCwd` to avoid cross-directory contamination
-- Reports write to `cwd/.takt/reports/` (clone) to prevent agents from discovering the main repository path via instruction
+- Reports write to `cwd/.takt/runs/{slug}/reports/` (clone) to prevent agents from discovering the main repository path via instruction
- Clones are ephemeral: created → task runs → auto-commit + push → deleted
- Use `takt list` to manage task branches after clone deletion
README.md (130 lines changed)
@@ -4,7 +4,7 @@

**T**ask **A**gent **K**oordination **T**ool - Define how AI agents coordinate, where humans intervene, and what gets recorded — in YAML

-TAKT runs multiple AI agents (Claude Code, Codex) through YAML-defined workflows. Each step — who runs, what they see, what's allowed, what happens on failure — is declared in a piece file, not left to the agent.
+TAKT runs multiple AI agents (Claude Code, Codex, OpenCode) through YAML-defined workflows. Each step — who runs, what they see, what's allowed, what happens on failure — is declared in a piece file, not left to the agent.

TAKT is built with TAKT itself (dogfooding).

@@ -49,14 +49,14 @@ Personas, policies, and knowledge are managed as independent files and freely co

Choose one:

-- **Use provider CLIs**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [Codex](https://github.com/openai/codex) installed
-- **Use direct API**: **Anthropic API Key** or **OpenAI API Key** (no CLI required)
+- **Use provider CLIs**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code), [Codex](https://github.com/openai/codex), or [OpenCode](https://opencode.ai) installed
+- **Use direct API**: **Anthropic API Key**, **OpenAI API Key**, or **OpenCode API Key** (no CLI required)

Additionally required:

- [GitHub CLI](https://cli.github.com/) (`gh`) — Only needed for `takt #N` (GitHub Issue execution)

-**Pricing Note**: When using API Keys, TAKT directly calls the Claude API (Anthropic) or OpenAI API. The pricing structure is the same as using Claude Code or Codex. Be mindful of costs, especially when running automated tasks in CI/CD environments, as API usage can accumulate.
+**Pricing Note**: When using API Keys, TAKT directly calls the Claude API (Anthropic), OpenAI API, or OpenCode API. The pricing structure is the same as using the respective CLI tools. Be mindful of costs, especially when running automated tasks in CI/CD environments, as API usage can accumulate.

## Installation
@@ -186,7 +186,7 @@ takt #6 --auto-pr

### Task Management (add / run / watch / list)

-Batch processing using task files (`.takt/tasks/`). Useful for accumulating multiple tasks and executing them together later.
+Batch processing using `.takt/tasks.yaml` with task directories under `.takt/tasks/{slug}/`. Useful for accumulating multiple tasks and executing them together later.

#### Add Task (`takt add`)

@@ -201,14 +201,14 @@ takt add #28

#### Execute Tasks (`takt run`)

```bash
-# Execute all pending tasks in .takt/tasks/
+# Execute all pending tasks in .takt/tasks.yaml
takt run
```

#### Watch Tasks (`takt watch`)

```bash
-# Monitor .takt/tasks/ and auto-execute tasks (resident process)
+# Monitor .takt/tasks.yaml and auto-execute tasks (resident process)
takt watch
```

@@ -225,6 +225,13 @@ takt list --non-interactive --action delete --branch takt/my-branch --yes

takt list --non-interactive --format json
```

+#### Task Directory Workflow (Create / Run / Verify)
+
+1. Run `takt add` and confirm a pending record is created in `.takt/tasks.yaml`.
+2. Open the generated `.takt/tasks/{slug}/order.md` and add detailed specifications/references as needed.
+3. Run `takt run` (or `takt watch`) to execute pending tasks from `tasks.yaml`.
+4. Verify outputs in `.takt/runs/{slug}/reports/` using the same slug as `task_dir`.
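The slug in step 4 ties a task directory to its run artifacts. A minimal sketch of that correspondence (the slug below is illustrative; real ones are generated by `takt add`):

```python
from pathlib import Path

# Hypothetical task_dir value from a tasks.yaml record; the final path
# component is the slug that names the matching run directory.
task_dir = Path(".takt/tasks/20260201-015714-foptng")
slug = task_dir.name

# Reports for this task land under .takt/runs/{slug}/reports/
reports_dir = Path(".takt/runs") / slug / "reports"
print(reports_dir)
```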
### Pipeline Mode (for CI/Automation)

Specifying `--pipeline` enables non-interactive pipeline mode. Automatically creates branch → runs piece → commits & pushes. Suitable for CI/CD automation.

@@ -315,7 +322,7 @@ takt reset categories

| `--repo <owner/repo>` | Specify repository (for PR creation) |
| `--create-worktree <yes\|no>` | Skip worktree confirmation prompt |
| `-q, --quiet` | Minimal output mode: suppress AI output (for CI) |
-| `--provider <name>` | Override agent provider (claude\|codex\|mock) |
+| `--provider <name>` | Override agent provider (claude\|codex\|opencode\|mock) |
| `--model <name>` | Override agent model |

## Pieces

@@ -328,7 +335,7 @@ TAKT uses YAML-based piece definitions and rule-based routing. Builtin pieces ar

```yaml
name: default
-max_iterations: 10
+max_movements: 10
initial_movement: plan

# Section maps — key: file path (relative to this YAML)

@@ -466,6 +473,7 @@ TAKT includes multiple builtin pieces:

| `structural-reform` | Full project review and structural reform: iterative codebase restructuring with staged file splits. |
| `unit-test` | Unit test focused piece: test analysis → test implementation → review → fix. |
| `e2e-test` | E2E test focused piece: E2E analysis → E2E implementation → review → fix (Vitest-based E2E flow). |
+| `frontend` | Frontend-specialized development piece with React/Next.js focused reviews and knowledge injection. |

**Per-persona provider overrides:** Use `persona_providers` in config to route specific personas to different providers (e.g., coder on Codex, reviewers on Claude) without duplicating pieces.

@@ -532,14 +540,14 @@ The model string is passed to the Codex SDK. If unspecified, defaults to `codex`

.takt/                    # Project-level configuration
├── config.yaml           # Project config (current piece, etc.)
-├── tasks/                # Pending task files (.yaml, .md)
-├── completed/            # Completed tasks and reports
-├── reports/              # Execution reports (auto-generated)
-│   └── {timestamp}-{slug}/
-└── logs/                 # NDJSON format session logs
-    ├── latest.json       # Pointer to current/latest session
-    ├── previous.json     # Pointer to previous session
-    └── {sessionId}.jsonl # NDJSON session log per piece execution
+├── tasks/                # Task input directories (.takt/tasks/{slug}/order.md, etc.)
+├── tasks.yaml            # Pending tasks metadata (task_dir, piece, worktree, etc.)
+└── runs/                 # Run-scoped artifacts
+    └── {slug}/
+        ├── reports/      # Execution reports (auto-generated)
+        ├── context/      # knowledge/policy/previous_response snapshots
+        ├── logs/         # NDJSON session logs for this run
+        └── meta.json     # Run metadata
```

Builtin resources are embedded in the npm package (`builtins/`). User files in `~/.takt/` take priority.
@@ -553,11 +561,17 @@ Configure default provider and model in `~/.takt/config.yaml`:

language: en
default_piece: default
log_level: info
-provider: claude # Default provider: claude or codex
+provider: claude # Default provider: claude, codex, or opencode
model: sonnet # Default model (optional)
branch_name_strategy: romaji # Branch name generation: 'romaji' (fast) or 'ai' (slow)
prevent_sleep: false # Prevent macOS idle sleep during execution (caffeinate)
notification_sound: true # Enable/disable notification sounds
notification_sound_events: # Optional per-event toggles
  iteration_limit: false
  piece_complete: true
  piece_abort: true
  run_complete: true # Enabled by default; set false to disable
  run_abort: true # Enabled by default; set false to disable
concurrency: 1 # Parallel task count for takt run (1-10, default: 1 = sequential)
task_poll_interval_ms: 500 # Polling interval for new tasks during takt run (100-5000, default: 500)
interactive_preview_movements: 3 # Movement previews in interactive mode (0-10, default: 3)

@@ -569,9 +583,10 @@ interactive_preview_movements: 3 # Movement previews in interactive mode (0-10,

# ai-antipattern-reviewer: claude # Keep reviewers on Claude

# API Key configuration (optional)
-# Can be overridden by environment variables TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY
+# Can be overridden by environment variables TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY / TAKT_OPENCODE_API_KEY
anthropic_api_key: sk-ant-... # For Claude (Anthropic)
# openai_api_key: sk-... # For Codex (OpenAI)
+# opencode_api_key: ... # For OpenCode

# Builtin piece filtering (optional)
# builtin_pieces_enabled: true # Set false to disable all builtins

@@ -595,17 +610,17 @@ anthropic_api_key: sk-ant-... # For Claude (Anthropic)

1. **Set via environment variables**:
```bash
export TAKT_ANTHROPIC_API_KEY=sk-ant-... # For Claude
# or
export TAKT_OPENAI_API_KEY=sk-... # For Codex
+export TAKT_OPENCODE_API_KEY=... # For OpenCode
```

2. **Set in config file**:
-Write `anthropic_api_key` or `openai_api_key` in `~/.takt/config.yaml` as shown above
+Write `anthropic_api_key`, `openai_api_key`, or `opencode_api_key` in `~/.takt/config.yaml` as shown above

Priority: Environment variables > `config.yaml` settings

**Notes:**
-- If you set an API Key, installing Claude Code or Codex is not necessary. TAKT directly calls the Anthropic API or OpenAI API.
+- If you set an API Key, installing Claude Code, Codex, or OpenCode is not necessary. TAKT directly calls the respective API.
- **Security**: If you write API Keys in `config.yaml`, be careful not to commit this file to Git. Consider using environment variables or adding `~/.takt/config.yaml` to `.gitignore`.

**Pipeline Template Variables:**
@@ -621,36 +636,43 @@ Priority: Environment variables > `config.yaml` settings

1. Piece movement `model` (highest priority)
2. Custom agent `model`
3. Global config `model`
-4. Provider default (Claude: sonnet, Codex: codex)
+4. Provider default (Claude: sonnet, Codex: codex, OpenCode: provider default)

## Detailed Guides

-### Task File Formats
+### Task Directory Format

-TAKT supports batch processing with task files in `.takt/tasks/`. Both `.yaml`/`.yml` and `.md` file formats are supported.
+TAKT stores task metadata in `.takt/tasks.yaml`, and each task's long specification in `.takt/tasks/{slug}/`.

-**YAML format** (recommended, supports worktree/branch/piece options):
+**Recommended layout**:

```text
.takt/
  tasks/
    20260201-015714-foptng/
      order.md
      schema.sql
      wireframe.png
  tasks.yaml
  runs/
    20260201-015714-foptng/
      reports/
```

**tasks.yaml record**:

```yaml
-# .takt/tasks/add-auth.yaml
-task: "Add authentication feature"
-worktree: true # Execute in isolated shared clone
-branch: "feat/add-auth" # Branch name (auto-generated if omitted)
-piece: "default" # Piece specification (uses current if omitted)
+tasks:
+  - name: add-auth-feature
+    status: pending
+    task_dir: .takt/tasks/20260201-015714-foptng
+    piece: default
+    created_at: "2026-02-01T01:57:14.000Z"
+    started_at: null
+    completed_at: null
```

-**Markdown format** (simple, backward compatible):
-
-```markdown
-# .takt/tasks/add-login-feature.md
-
-Add login feature to the application.
-
-Requirements:
-- Username and password fields
-- Form validation
-- Error handling on failure
-```
+`takt add` creates `.takt/tasks/{slug}/order.md` automatically and saves `task_dir` to `tasks.yaml`.
#### Isolated Execution with Shared Clone

@@ -667,15 +689,14 @@ Clones are ephemeral. After task completion, they auto-commit + push, then delet

### Session Logs

-TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/logs/`. Each record is atomically appended, so partial logs are preserved even if the process crashes, and you can track in real-time with `tail -f`.
+TAKT writes session logs in NDJSON (`.jsonl`) format to `.takt/runs/{slug}/logs/`. Each record is atomically appended, so partial logs are preserved even if the process crashes, and you can track in real-time with `tail -f`.

-- `.takt/logs/latest.json` - Pointer to current (or latest) session
-- `.takt/logs/previous.json` - Pointer to previous session
-- `.takt/logs/{sessionId}.jsonl` - NDJSON session log per piece execution
+- `.takt/runs/{slug}/logs/{sessionId}.jsonl` - NDJSON session log per piece execution
+- `.takt/runs/{slug}/meta.json` - Run metadata (`task`, `piece`, `start/end`, `status`, etc.)

Record types: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`

Agents can read `previous.json` to inherit context from the previous execution. Session continuation is automatic — just run `takt "task"` to continue from the previous session.
The latest previous response is stored at `.takt/runs/{slug}/context/previous_responses/latest.md` and inherited automatically.
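Because each NDJSON line is a standalone JSON record, the log can be consumed incrementally. A small sketch (the record types are documented above; any other fields shown are assumptions for illustration):

```python
import json

# Hypothetical NDJSON lines from a session log; "type" values come from the
# documented record types, the "step" field is illustrative.
lines = [
    '{"type": "piece_start"}',
    '{"type": "step_complete", "step": "plan"}',
]
types = [json.loads(line)["type"] for line in lines]
print(types)
```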
### Adding Custom Pieces

@@ -690,7 +711,7 @@ takt eject default

# ~/.takt/pieces/my-piece.yaml
name: my-piece
description: Custom piece
-max_iterations: 5
+max_movements: 5
initial_movement: analyze

personas:

@@ -740,11 +761,11 @@ Variables available in `instruction_template`:

|----------|-------------|
| `{task}` | Original user request (auto-injected if not in template) |
| `{iteration}` | Piece-wide turn count (total steps executed) |
-| `{max_iterations}` | Maximum iteration count |
+| `{max_movements}` | Maximum movement count |
| `{movement_iteration}` | Per-movement iteration count (times this movement has been executed) |
| `{previous_response}` | Output from previous movement (auto-injected if not in template) |
| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
-| `{report_dir}` | Report directory path (e.g., `.takt/reports/20250126-143052-task-summary`) |
+| `{report_dir}` | Report directory path (e.g., `.takt/runs/20250126-143052-task-summary/reports`) |
| `{report:filename}` | Expands to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |

### Piece Design

@@ -777,7 +798,7 @@ Special `next` values: `COMPLETE` (success), `ABORT` (failure)

| `edit` | - | Whether movement can edit project files (`true`/`false`) |
| `pass_previous_response` | `true` | Pass previous movement output to `{previous_response}` |
| `allowed_tools` | - | List of tools agent can use (Read, Glob, Grep, Edit, Write, Bash, etc.) |
-| `provider` | - | Override provider for this movement (`claude` or `codex`) |
+| `provider` | - | Override provider for this movement (`claude`, `codex`, or `opencode`) |
| `model` | - | Override model for this movement |
| `permission_mode` | - | Permission mode: `readonly`, `edit`, `full` (provider-independent) |
| `output_contracts` | - | Output contract definitions for report files |

@@ -855,7 +876,7 @@ npm install -g takt

takt --pipeline --task "Fix bug" --auto-pr --repo owner/repo
```

-For authentication, set `TAKT_ANTHROPIC_API_KEY` or `TAKT_OPENAI_API_KEY` environment variables (TAKT-specific prefix).
+For authentication, set `TAKT_ANTHROPIC_API_KEY`, `TAKT_OPENAI_API_KEY`, or `TAKT_OPENCODE_API_KEY` environment variables (TAKT-specific prefix).

```bash
# For Claude (Anthropic)

@@ -863,6 +884,9 @@ export TAKT_ANTHROPIC_API_KEY=sk-ant-...

# For Codex (OpenAI)
export TAKT_OPENAI_API_KEY=sk-...

+# For OpenCode
+export TAKT_OPENCODE_API_KEY=...
```

## Documentation
@@ -6,6 +6,18 @@ piece_categories:

      - coding
      - minimal
      - compound-eye
+  🎨 Frontend:
+    pieces:
+      - frontend
+  ⚙️ Backend: {}
+  🔧 Expert:
+    Full Stack:
+      pieces:
+        - expert
+        - expert-cqrs
+  🛠️ Refactoring:
+    pieces:
+      - structural-reform
  🔍 Review:
    pieces:
      - review-fix-minimal

@@ -14,16 +26,6 @@ piece_categories:

    pieces:
      - unit-test
      - e2e-test
-  🎨 Frontend: {}
-  ⚙️ Backend: {}
-  🔧 Expert:
-    Full Stack:
-      pieces:
-        - expert
-        - expert-cqrs
-  Refactoring:
-    pieces:
-      - structural-reform
  Others:
    pieces:
      - research

@@ -1,6 +1,6 @@

name: coding
description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
-max_iterations: 20
+max_movements: 20
initial_movement: plan
movements:
  - name: plan

@@ -1,6 +1,6 @@

name: compound-eye
description: Multi-model review - send the same instruction to Claude and Codex simultaneously, synthesize both responses
-max_iterations: 10
+max_movements: 10
initial_movement: evaluate
movements:
  - name: evaluate

@@ -1,6 +1,6 @@

name: default
description: Standard development piece with planning and specialized reviews
-max_iterations: 30
+max_movements: 30
initial_movement: plan
loop_monitors:
  - cycle:

@@ -1,6 +1,6 @@

name: e2e-test
description: E2E test focused piece (E2E analysis → E2E implementation → review → fix)
-max_iterations: 20
+max_movements: 20
initial_movement: plan_test
loop_monitors:
  - cycle:

@@ -1,6 +1,6 @@

name: expert-cqrs
description: CQRS+ES, Frontend, Security, QA Expert Review
-max_iterations: 30
+max_movements: 30
initial_movement: plan
movements:
  - name: plan

@@ -26,7 +26,6 @@ movements:

  - name: implement
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -87,7 +86,6 @@ movements:

  - name: ai_fix
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -218,7 +216,6 @@ movements:

  - name: fix
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -267,7 +264,6 @@ movements:

  - name: fix_supervisor
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -1,6 +1,6 @@

name: expert
description: Architecture, Frontend, Security, QA Expert Review
-max_iterations: 30
+max_movements: 30
initial_movement: plan
movements:
  - name: plan

@@ -26,7 +26,6 @@ movements:

  - name: implement
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -86,7 +85,6 @@ movements:

  - name: ai_fix
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -216,7 +214,6 @@ movements:

  - name: fix
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing

@@ -264,7 +261,6 @@ movements:

  - name: fix_supervisor
    edit: true
    persona: coder
-    pass_previous_response: false
    policy:
      - coding
      - testing
builtins/en/pieces/frontend.yaml (new file, 282 lines)
@@ -0,0 +1,282 @@

name: frontend
description: Frontend, Security, QA Expert Review
max_movements: 30
initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Bash
      - WebSearch
      - WebFetch
    instruction: plan
    rules:
      - condition: Task analysis and planning is complete
        next: implement
      - condition: Requirements are unclear and planning cannot proceed
        next: ABORT
    output_contracts:
      report:
        - name: 00-plan.md
          format: plan
  - name: implement
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    session: refresh
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    instruction: implement
    rules:
      - condition: Implementation is complete
        next: ai_review
      - condition: No implementation (report only)
        next: ai_review
      - condition: Cannot proceed with implementation
        next: ai_review
      - condition: User input required
        next: implement
        requires_user_input: true
        interactive_only: true
    output_contracts:
      report:
        - Scope: 01-coder-scope.md
        - Decisions: 02-coder-decisions.md
  - name: ai_review
    edit: false
    persona: ai-antipattern-reviewer
    policy:
      - review
      - ai-antipattern
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    instruction: ai-review
    rules:
      - condition: No AI-specific issues found
        next: reviewers
      - condition: AI-specific issues detected
        next: ai_fix
    output_contracts:
      report:
        - name: 03-ai-review.md
          format: ai-review
  - name: ai_fix
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    session: refresh
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    instruction: ai-fix
    rules:
      - condition: AI Reviewer's issues have been fixed
        next: ai_review
      - condition: No fix needed (verified target files/spec)
        next: ai_no_fix
      - condition: Unable to proceed with fixes
        next: ai_no_fix
  - name: ai_no_fix
    edit: false
    persona: architecture-reviewer
    policy: review
    allowed_tools:
      - Read
      - Glob
      - Grep
    rules:
      - condition: ai_review's findings are valid (fix required)
        next: ai_fix
      - condition: ai_fix's judgment is valid (no fix needed)
        next: reviewers
    instruction: arbitrate
  - name: reviewers
    parallel:
      - name: arch-review
        edit: false
        persona: architecture-reviewer
        policy: review
        knowledge:
          - architecture
          - frontend
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-arch
        output_contracts:
          report:
            - name: 04-architect-review.md
              format: architecture-review
      - name: frontend-review
        edit: false
        persona: frontend-reviewer
        policy: review
        knowledge: frontend
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-frontend
        output_contracts:
          report:
            - name: 05-frontend-review.md
              format: frontend-review
      - name: security-review
        edit: false
        persona: security-reviewer
        policy: review
        knowledge: security
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-security
        output_contracts:
          report:
            - name: 06-security-review.md
              format: security-review
      - name: qa-review
|
||||
edit: false
|
||||
persona: qa-reviewer
|
||||
policy:
|
||||
- review
|
||||
- qa
|
||||
allowed_tools:
|
||||
- Read
|
||||
- Glob
|
||||
- Grep
|
||||
- WebSearch
|
||||
- WebFetch
|
||||
rules:
|
||||
- condition: approved
|
||||
- condition: needs_fix
|
||||
instruction: review-qa
|
||||
output_contracts:
|
||||
report:
|
||||
- name: 07-qa-review.md
|
||||
format: qa-review
|
||||
rules:
|
||||
- condition: all("approved")
|
||||
next: supervise
|
||||
- condition: any("needs_fix")
|
||||
next: fix
|
||||
- name: fix
|
||||
edit: true
|
||||
persona: coder
|
||||
policy:
|
||||
- coding
|
||||
- testing
|
||||
knowledge:
|
||||
- frontend
|
||||
- security
|
||||
- architecture
|
||||
allowed_tools:
|
||||
- Read
|
||||
- Glob
|
||||
- Grep
|
||||
- Edit
|
||||
- Write
|
||||
- Bash
|
||||
- WebSearch
|
||||
- WebFetch
|
||||
permission_mode: edit
|
||||
rules:
|
||||
- condition: Fix complete
|
||||
next: reviewers
|
||||
- condition: Cannot proceed, insufficient info
|
||||
next: plan
|
||||
instruction: fix
|
||||
- name: supervise
|
||||
edit: false
|
||||
persona: expert-supervisor
|
||||
policy: review
|
||||
allowed_tools:
|
||||
- Read
|
||||
- Glob
|
||||
- Grep
|
||||
- WebSearch
|
||||
- WebFetch
|
||||
instruction: supervise
|
||||
rules:
|
||||
- condition: All validations pass and ready to merge
|
||||
next: COMPLETE
|
||||
- condition: Issues detected during final review
|
||||
next: fix_supervisor
|
||||
output_contracts:
|
||||
report:
|
||||
- Validation: 08-supervisor-validation.md
|
||||
- Summary: summary.md
|
||||
- name: fix_supervisor
|
||||
edit: true
|
||||
persona: coder
|
||||
policy:
|
||||
- coding
|
||||
- testing
|
||||
knowledge:
|
||||
- frontend
|
||||
- security
|
||||
- architecture
|
||||
allowed_tools:
|
||||
- Read
|
||||
- Glob
|
||||
- Grep
|
||||
- Edit
|
||||
- Write
|
||||
- Bash
|
||||
- WebSearch
|
||||
- WebFetch
|
||||
instruction: fix-supervisor
|
||||
rules:
|
||||
- condition: Supervisor's issues have been fixed
|
||||
next: supervise
|
||||
- condition: Unable to proceed with fixes
|
||||
next: plan
|
||||
@@ -1,6 +1,6 @@
 name: magi
 description: MAGI Deliberation System - Analyze from 3 perspectives and decide by majority
-max_iterations: 5
+max_movements: 5
 initial_movement: melchior
 movements:
 - name: melchior

@@ -1,6 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: implement
 movements:
 - name: implement

@@ -1,6 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
-max_iterations: 10
+max_movements: 10
 initial_movement: execute
 movements:
 - name: execute

@@ -1,6 +1,6 @@
 name: research
 description: Research piece - autonomously executes research without asking questions
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan
@@ -13,7 +13,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: plan

@@ -48,7 +48,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: dig

@@ -88,7 +88,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: supervise (research quality evaluation)

@@ -1,6 +1,6 @@
 name: review-fix-minimal
 description: Review and fix piece for existing code (starts with review, no implementation)
-max_iterations: 20
+max_movements: 20
 initial_movement: reviewers
 movements:
 - name: implement

@@ -1,6 +1,6 @@
 name: review-only
 description: Review-only piece - reviews code without making edits
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan

@@ -1,6 +1,6 @@
 name: structural-reform
 description: Full project review and structural reform - iterative codebase restructuring with staged file splits
-max_iterations: 50
+max_movements: 50
 initial_movement: review
 loop_monitors:
 - cycle:
@@ -44,7 +44,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: review (full project review)

@@ -126,7 +126,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: plan_reform (reform plan creation)

@@ -323,7 +323,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: verify (build and test verification)

@@ -378,7 +378,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## Piece Status
-    - Iteration: {iteration}/{max_iterations} (piece-wide)
+    - Iteration: {iteration}/{max_movements} (piece-wide)
     - Movement Iteration: {movement_iteration} (times this movement has run)
     - Movement: next_target (progress check and next target selection)

@@ -1,6 +1,6 @@
 name: unit-test
 description: Unit test focused piece (test analysis → test implementation → review → fix)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:
@@ -82,9 +82,9 @@ InstructionBuilder が instruction_template 内の `{変数名}` を展開する
 | 変数 | 内容 |
 |------|------|
 | `{iteration}` | ピース全体のイテレーション数 |
-| `{max_iterations}` | 最大イテレーション数 |
+| `{max_movements}` | 最大イテレーション数 |
 | `{movement_iteration}` | ムーブメント単位のイテレーション数 |
-| `{report_dir}` | レポートディレクトリ名 |
+| `{report_dir}` | レポートディレクトリ名(`.takt/runs/{slug}/reports`) |
 | `{report:filename}` | 指定レポートの内容展開(ファイルが存在する場合) |
 | `{cycle_count}` | ループモニターで検出されたサイクル回数(`loop_monitors` 専用) |

@@ -222,7 +222,7 @@ InstructionBuilder が instruction_template 内の `{変数名}` を展開する

 # 非許容
 **参照するレポート:**
-- .takt/reports/20250101-task/ai-review.md ← パスのハードコード
+- .takt/runs/20250101-task/reports/ai-review.md ← パスのハードコード
 ```

 ---

@@ -157,7 +157,7 @@

 1. **ポリシーの詳細ルール**: コード例・判定基準・例外リスト等の詳細はポリシーの責務(1行の行動指針は行動姿勢に記載してよい)
 2. **ピース固有の概念**: ムーブメント名、レポートファイル名、ステップ間ルーティング
-3. **ツール固有の環境情報**: `.takt/reports/` 等のディレクトリパス、テンプレート変数(`{report_dir}` 等)
+3. **ツール固有の環境情報**: `.takt/runs/` 等のディレクトリパス、テンプレート変数(`{report_dir}` 等)
 4. **実行手順**: 「まず〜を読み、次に〜を実行」のような手順はinstruction_templateの責務

 ### 例外: ドメイン知識としての重複

@@ -100,7 +100,7 @@

 1. **特定エージェント固有の知識**: Architecture Reviewer だけが使う検出手法等
 2. **ピース固有の概念**: ムーブメント名、レポートファイル名
-3. **ツール固有のパス**: `.takt/reports/` 等の具体的なディレクトリパス
+3. **ツール固有のパス**: `.takt/runs/` 等の具体的なディレクトリパス
 4. **実行手順**: どのファイルを読め、何を実行しろ等

 ---
@@ -6,6 +6,18 @@ piece_categories:
       - coding
       - minimal
       - compound-eye
+  🎨 フロントエンド:
+    pieces:
+      - frontend
+  ⚙️ バックエンド: {}
+  🔧 エキスパート:
+    フルスタック:
+      pieces:
+        - expert
+        - expert-cqrs
+  🛠️ リファクタリング:
+    pieces:
+      - structural-reform
   🔍 レビュー:
     pieces:
       - review-fix-minimal
@@ -14,16 +26,6 @@ piece_categories:
     pieces:
       - unit-test
      - e2e-test
-  🎨 フロントエンド: {}
-  ⚙️ バックエンド: {}
-  🔧 エキスパート:
-    フルスタック:
-      pieces:
-        - expert
-        - expert-cqrs
-  リファクタリング:
-    pieces:
-      - structural-reform
   その他:
     pieces:
       - research
@@ -1,6 +1,6 @@
 name: coding
 description: Lightweight development piece with planning and parallel reviews (plan -> implement -> parallel review -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan
 movements:
 - name: plan

@@ -1,6 +1,6 @@
 name: compound-eye
 description: 複眼レビュー - 同じ指示を Claude と Codex に同時に投げ、両者の回答を統合する
-max_iterations: 10
+max_movements: 10
 initial_movement: evaluate

 movements:

@@ -1,6 +1,6 @@
 name: default
 description: Standard development piece with planning and specialized reviews
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 loop_monitors:
 - cycle:

@@ -1,6 +1,6 @@
 name: e2e-test
 description: E2Eテスト追加に特化したピース(E2E分析→E2E実装→レビュー→修正)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:

@@ -1,6 +1,6 @@
 name: expert-cqrs
 description: CQRS+ES・フロントエンド・セキュリティ・QA専門家レビュー
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -87,7 +86,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -218,7 +216,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -267,7 +264,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing

@@ -1,6 +1,6 @@
 name: expert
 description: アーキテクチャ・フロントエンド・セキュリティ・QA専門家レビュー
-max_iterations: 30
+max_movements: 30
 initial_movement: plan
 movements:
 - name: plan
@@ -26,7 +26,6 @@ movements:
 - name: implement
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -86,7 +85,6 @@ movements:
 - name: ai_fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -216,7 +214,6 @@ movements:
 - name: fix
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
@@ -264,7 +261,6 @@ movements:
 - name: fix_supervisor
   edit: true
   persona: coder
-  pass_previous_response: false
   policy:
   - coding
   - testing
builtins/ja/pieces/frontend.yaml (new file, +282 lines)
@@ -0,0 +1,282 @@
name: frontend
description: フロントエンド・セキュリティ・QA専門家レビュー
max_movements: 30
initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Bash
      - WebSearch
      - WebFetch
    instruction: plan
    rules:
      - condition: タスク分析と計画が完了した
        next: implement
      - condition: 要件が不明確で計画を立てられない
        next: ABORT
    output_contracts:
      report:
        - name: 00-plan.md
          format: plan
  - name: implement
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    session: refresh
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    instruction: implement
    rules:
      - condition: 実装が完了した
        next: ai_review
      - condition: 実装未着手(レポートのみ)
        next: ai_review
      - condition: 実装を進行できない
        next: ai_review
      - condition: ユーザー入力が必要
        next: implement
        requires_user_input: true
        interactive_only: true
    output_contracts:
      report:
        - Scope: 01-coder-scope.md
        - Decisions: 02-coder-decisions.md
  - name: ai_review
    edit: false
    persona: ai-antipattern-reviewer
    policy:
      - review
      - ai-antipattern
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    instruction: ai-review
    rules:
      - condition: AI特有の問題が見つからない
        next: reviewers
      - condition: AI特有の問題が検出された
        next: ai_fix
    output_contracts:
      report:
        - name: 03-ai-review.md
          format: ai-review
  - name: ai_fix
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    session: refresh
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    instruction: ai-fix
    rules:
      - condition: AI Reviewerの指摘に対する修正が完了した
        next: ai_review
      - condition: 修正不要(指摘対象ファイル/仕様の確認済み)
        next: ai_no_fix
      - condition: 修正を進行できない
        next: ai_no_fix
  - name: ai_no_fix
    edit: false
    persona: architecture-reviewer
    policy: review
    allowed_tools:
      - Read
      - Glob
      - Grep
    rules:
      - condition: ai_reviewの指摘が妥当(修正すべき)
        next: ai_fix
      - condition: ai_fixの判断が妥当(修正不要)
        next: reviewers
    instruction: arbitrate
  - name: reviewers
    parallel:
      - name: arch-review
        edit: false
        persona: architecture-reviewer
        policy: review
        knowledge:
          - architecture
          - frontend
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-arch
        output_contracts:
          report:
            - name: 04-architect-review.md
              format: architecture-review
      - name: frontend-review
        edit: false
        persona: frontend-reviewer
        policy: review
        knowledge: frontend
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-frontend
        output_contracts:
          report:
            - name: 05-frontend-review.md
              format: frontend-review
      - name: security-review
        edit: false
        persona: security-reviewer
        policy: review
        knowledge: security
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-security
        output_contracts:
          report:
            - name: 06-security-review.md
              format: security-review
      - name: qa-review
        edit: false
        persona: qa-reviewer
        policy:
          - review
          - qa
        allowed_tools:
          - Read
          - Glob
          - Grep
          - WebSearch
          - WebFetch
        rules:
          - condition: approved
          - condition: needs_fix
        instruction: review-qa
        output_contracts:
          report:
            - name: 07-qa-review.md
              format: qa-review
    rules:
      - condition: all("approved")
        next: supervise
      - condition: any("needs_fix")
        next: fix
  - name: fix
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    permission_mode: edit
    rules:
      - condition: 修正が完了した
        next: reviewers
      - condition: 修正を進行できない
        next: plan
    instruction: fix
  - name: supervise
    edit: false
    persona: expert-supervisor
    policy: review
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    instruction: supervise
    rules:
      - condition: すべての検証が完了し、マージ可能な状態である
        next: COMPLETE
      - condition: 問題が検出された
        next: fix_supervisor
    output_contracts:
      report:
        - Validation: 08-supervisor-validation.md
        - Summary: summary.md
  - name: fix_supervisor
    edit: true
    persona: coder
    policy:
      - coding
      - testing
    knowledge:
      - frontend
      - security
      - architecture
    allowed_tools:
      - Read
      - Glob
      - Grep
      - Edit
      - Write
      - Bash
      - WebSearch
      - WebFetch
    instruction: fix-supervisor
    rules:
      - condition: 監督者の指摘に対する修正が完了した
        next: supervise
      - condition: 修正を進行できない
        next: plan
@@ -1,6 +1,6 @@
 name: magi
 description: MAGI合議システム - 3つの観点から分析し多数決で判定
-max_iterations: 5
+max_movements: 5
 initial_movement: melchior
 movements:
 - name: melchior

@@ -1,6 +1,6 @@
 name: minimal
 description: Minimal development piece (implement -> parallel review -> fix if needed -> complete)
-max_iterations: 20
+max_movements: 20
 initial_movement: implement
 movements:
 - name: implement

@@ -1,6 +1,6 @@
 name: passthrough
 description: Single-agent thin wrapper. Pass task directly to coder as-is.
-max_iterations: 10
+max_movements: 10
 initial_movement: execute
 movements:
 - name: execute

@@ -1,6 +1,6 @@
 name: research
 description: 調査ピース - 質問せずに自律的に調査を実行
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan
@@ -13,7 +13,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: plan

@@ -48,7 +48,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: dig

@@ -88,7 +88,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピース状況
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメント実行回数: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: supervise (調査品質評価)

@@ -1,6 +1,6 @@
 name: review-fix-minimal
 description: 既存コードのレビューと修正ピース(レビュー開始、実装なし)
-max_iterations: 20
+max_movements: 20
 initial_movement: reviewers
 movements:
 - name: implement

@@ -1,6 +1,6 @@
 name: review-only
 description: レビュー専用ピース - コードをレビューするだけで編集は行わない
-max_iterations: 10
+max_movements: 10
 initial_movement: plan
 movements:
 - name: plan

@@ -1,6 +1,6 @@
 name: structural-reform
 description: プロジェクト全体レビューと構造改革 - 段階的なファイル分割による反復的コードベース再構築
-max_iterations: 50
+max_movements: 50
 initial_movement: review
 loop_monitors:
 - cycle:
@@ -44,7 +44,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: review(プロジェクト全体レビュー)

@@ -126,7 +126,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: plan_reform(改革計画策定)

@@ -323,7 +323,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: verify(ビルド・テスト検証)

@@ -378,7 +378,7 @@ movements:
     - WebFetch
   instruction_template: |
     ## ピースステータス
-    - イテレーション: {iteration}/{max_iterations}(ピース全体)
+    - イテレーション: {iteration}/{max_movements}(ピース全体)
     - ムーブメントイテレーション: {movement_iteration}(このムーブメントの実行回数)
     - ムーブメント: next_target(進捗確認と次ターゲット選択)

@@ -1,6 +1,6 @@
 name: unit-test
 description: 単体テスト追加に特化したピース(テスト分析→テスト実装→レビュー→修正)
-max_iterations: 20
+max_movements: 20
 initial_movement: plan_test
 loop_monitors:
 - cycle:
@@ -1,6 +1,6 @@
 # Temporary files
 logs/
-reports/
+runs/
 completed/
 tasks/
 worktrees/
@@ -1,37 +1,48 @@
-TAKT Task File Format
-=====================
+TAKT Task Directory Format
+==========================

-Tasks placed in this directory (.takt/tasks/) will be processed by TAKT.
+`.takt/tasks/` is the task input directory. Each task uses one subdirectory.

-## YAML Format (Recommended)
+## Directory Layout (Recommended)

-  # .takt/tasks/my-task.yaml
-  task: "Task description"
-  worktree: true             # (optional) true | "/path/to/dir"
-  branch: "feat/my-feature"  # (optional) branch name
-  piece: "default"           # (optional) piece name
+  .takt/
+    tasks/
+      20260201-015714-foptng/
+        order.md
+        schema.sql
+        wireframe.png
+
+- Directory name should match the report directory slug.
+- `order.md` is required.
+- Other files are optional reference materials.
+
+## tasks.yaml Format
+
+Store task metadata in `.takt/tasks.yaml`, and point to the task directory with `task_dir`.
+
+  tasks:
+    - name: add-auth-feature
+      status: pending
+      task_dir: .takt/tasks/20260201-015714-foptng
+      piece: default
+      created_at: "2026-02-01T01:57:14.000Z"
+      started_at: null
+      completed_at: null

 Fields:
   task         (required)     Task description (string)
   worktree     (optional)     true: create shared clone, "/path": clone at path
   branch       (optional)     Branch name (auto-generated if omitted: takt/{timestamp}-{slug})
   piece        (optional)     Piece name (uses current piece if omitted)
+  task_dir     (recommended)  Path to task directory that contains `order.md`
+  content      (legacy)       Inline task text (kept for compatibility)
+  content_file (legacy)       Path to task text file (kept for compatibility)

-## Markdown Format (Simple)
+## Command Behavior

-  # .takt/tasks/my-task.md
-
-  Entire file content becomes the task description.
-  Supports multiline. No structured options available.
-
-## Supported Extensions
-
-  .yaml, .yml -> YAML format (parsed and validated)
-  .md         -> Markdown format (plain text, backward compatible)
+- `takt add` creates `.takt/tasks/{slug}/order.md` automatically.
+- `takt run` and `takt watch` read `.takt/tasks.yaml` and resolve `task_dir`.
+- Report output is written to `.takt/runs/{slug}/reports/`.

 ## Commands

-  takt /add-task     Add a task interactively
-  takt /run-tasks    Run all pending tasks
-  takt /watch        Watch and auto-run tasks
-  takt /list-tasks   List task branches (merge/delete)
+  takt add     Add a task and create task directory
+  takt run     Run all pending tasks in tasks.yaml
+  takt watch   Watch tasks.yaml and run pending tasks
+  takt list    List task branches (merge/delete)
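Read together, the layout and `tasks.yaml` sections above compose as follows — a minimal sketch using the illustrative slug and task name from the examples above (not real project data):

```yaml
# .takt/tasks.yaml — one entry per task; task_dir points at the
# directory that holds the required order.md
tasks:
  - name: add-auth-feature
    status: pending
    task_dir: .takt/tasks/20260201-015714-foptng
    piece: default
```

Per the command behavior notes above, `takt run` would resolve `task_dir`, read `order.md` as the task instruction, and write reports under `.takt/runs/20260201-015714-foptng/reports/`.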
@@ -83,7 +83,7 @@ $ARGUMENTS を以下のように解析する:
 3. 見つからない場合: 上記2ディレクトリを Glob で列挙し、AskUserQuestion で選択させる

 YAMLから以下を抽出する(→ references/yaml-schema.md 参照):
-- `name`, `max_iterations`, `initial_movement`, `movements` 配列
+- `name`, `max_movements`, `initial_movement`, `movements` 配列
 - セクションマップ: `personas`, `policies`, `instructions`, `output_contracts`, `knowledge`

 ### 手順 2: セクションリソースの事前読み込み
@@ -116,13 +116,21 @@ TeamCreate tool を呼ぶ:
 - `permission_mode = コマンドで解析された権限モード("bypassPermissions" / "acceptEdits" / "default")`
 - `movement_history = []`(遷移履歴。Loop Monitor 用)

-**レポートディレクトリ**: いずれかの movement に `report` フィールドがある場合、`.takt/reports/{YYYYMMDD-HHmmss}-{slug}/` を作成し、パスを `report_dir` 変数に保持する。
+**実行ディレクトリ**: いずれかの movement に `report` フィールドがある場合、`.takt/runs/{YYYYMMDD-HHmmss}-{slug}/` を作成し、以下を配置する。
+- `reports/`(レポート出力)
+- `context/knowledge/`(Knowledge スナップショット)
+- `context/policy/`(Policy スナップショット)
+- `context/previous_responses/`(Previous Response 履歴 + `latest.md`)
+- `logs/`(実行ログ)
+- `meta.json`(run メタデータ)
+
+レポート出力先パスを `report_dir` 変数(`.takt/runs/{slug}/reports`)として保持する。

 次に **手順 5** に進む。

 ### 手順 5: チームメイト起動

-**iteration が max_iterations を超えていたら → 手順 8(ABORT: イテレーション上限)に進む。**
+**iteration が max_movements を超えていたら → 手順 8(ABORT: イテレーション上限)に進む。**

 current_movement のプロンプトを構築する(→ references/engine.md のプロンプト構築を参照)。
@@ -133,7 +133,7 @@ movement の `instruction:` キーから指示テンプレートファイルを
 - ワーキングディレクトリ: {cwd}
 - ピース: {piece_name}
 - Movement: {movement_name}
-- イテレーション: {iteration} / {max_iterations}
+- イテレーション: {iteration} / {max_movements}
 - Movement イテレーション: {movement_iteration} 回目
 ```
@@ -146,9 +146,9 @@ movement の `instruction:` キーから指示テンプレートファイルを
 | `{task}` | ユーザーが入力したタスク内容 |
 | `{previous_response}` | 前の movement のチームメイト出力 |
 | `{iteration}` | ピース全体のイテレーション数(1始まり) |
-| `{max_iterations}` | ピースの max_iterations 値 |
+| `{max_movements}` | ピースの max_movements 値 |
 | `{movement_iteration}` | この movement が実行された回数(1始まり) |
-| `{report_dir}` | レポートディレクトリパス |
+| `{report_dir}` | レポートディレクトリパス(`.takt/runs/{slug}/reports`) |
 | `{report:ファイル名}` | 指定レポートファイルの内容(Read で取得) |

 ### {report:ファイル名} の処理

@@ -212,7 +212,10 @@ report:
 チームメイトの出力からレポート内容を抽出し、Write tool でレポートディレクトリに保存する。
 **この作業は Team Lead(あなた)が行う。** チームメイトの出力を受け取った後に実施する。

-**レポートディレクトリ**: `.takt/reports/{timestamp}-{slug}/` に作成する。
+**実行ディレクトリ**: `.takt/runs/{timestamp}-{slug}/` に作成する。
+- レポートは `.takt/runs/{timestamp}-{slug}/reports/` に保存する。
+- `Knowledge` / `Policy` / `Previous Response` は `.takt/runs/{timestamp}-{slug}/context/` 配下に保存する。
+- 最新の previous response は `.takt/runs/{timestamp}-{slug}/context/previous_responses/latest.md` とする。
 - `{timestamp}`: `YYYYMMDD-HHmmss` 形式
 - `{slug}`: タスク内容の先頭30文字をスラグ化
@@ -314,7 +317,7 @@ parallel のサブステップにも同様にタグ出力指示を注入する
 ### 基本ルール

 - 同じ movement が連続3回以上実行されたら警告を表示する
-- `max_iterations` に到達したら強制終了(ABORT)する
+- `max_movements` に到達したら強制終了(ABORT)する

 ### カウンター管理

@@ -358,17 +361,24 @@ loop_monitors:
   d. judge の出力を judge の `rules` で評価する
   e. マッチした rule の `next` に遷移する(通常のルール評価をオーバーライドする)

-## レポート管理
+## 実行アーティファクト管理

-### レポートディレクトリの作成
+### 実行ディレクトリの作成

-ピース実行開始時にレポートディレクトリを作成する:
+ピース実行開始時に実行ディレクトリを作成する:

 ```
-.takt/reports/{YYYYMMDD-HHmmss}-{slug}/
+.takt/runs/{YYYYMMDD-HHmmss}-{slug}/
+  reports/
+  context/
+    knowledge/
+    policy/
+    previous_responses/
+  logs/
+  meta.json
 ```

-このパスを `{report_dir}` 変数として全 movement から参照可能にする。
+このうち `reports/` のパスを `{report_dir}` 変数として全 movement から参照可能にする。

 ### レポートの保存

@@ -392,7 +402,7 @@ loop_monitors:
 ↓
 TeamCreate でチーム作成
 ↓
-レポートディレクトリ作成
+実行ディレクトリ作成
 ↓
 initial_movement を取得
 ↓
@ -7,7 +7,7 @@
|
||||
```yaml
|
||||
name: piece-name # ピース名(必須)
|
||||
description: 説明テキスト # ピースの説明(任意)
|
||||
max_iterations: 10 # 最大イテレーション数(必須)
|
||||
max_movements: 10 # 最大イテレーション数(必須)
|
||||
initial_movement: plan # 最初に実行する movement 名(必須)
|
||||
|
||||
# セクションマップ(キー → ファイルパスの対応表)
|
||||
@ -192,7 +192,7 @@ quality_gates:
|
||||
| `{task}` | ユーザーのタスク入力(template に含まれない場合は自動追加) |
|
||||
| `{previous_response}` | 前の movement の出力(pass_previous_response: true 時、自動追加) |
|
||||
| `{iteration}` | ピース全体のイテレーション数 |
|
||||
| `{max_iterations}` | 最大イテレーション数 |
|
||||
| `{max_movements}` | 最大ムーブメント数 |
|
||||
| `{movement_iteration}` | この movement の実行回数 |
|
||||
| `{report_dir}` | レポートディレクトリ名 |
|
||||
| `{report:ファイル名}` | 指定レポートファイルの内容を展開 |
|
||||
|
||||
@ -2,7 +2,7 @@
|
||||
|
||||
**T**ask **A**gent **K**oordination **T**ool - AIエージェントの協調手順・人の介入ポイント・記録をYAMLで定義する
|
||||
|
||||
TAKTは複数のAIエージェント(Claude Code、Codex)をYAMLで定義されたワークフローに従って実行します。各ステップで誰が実行し、何を見て、何を許可し、失敗時にどうするかはピースファイルに宣言され、エージェント任せにしません。
|
||||
TAKTは複数のAIエージェント(Claude Code、Codex、OpenCode)をYAMLで定義されたワークフローに従って実行します。各ステップで誰が実行し、何を見て、何を許可し、失敗時にどうするかはピースファイルに宣言され、エージェント任せにしません。
|
||||
|
||||
TAKTはTAKT自身で開発されています(ドッグフーディング)。
|
||||
|
||||
@ -45,14 +45,14 @@ TAKTはエージェントの実行を**制御**し、プロンプトの構成要
|
||||
|
||||
次のいずれかを選択してください。
|
||||
|
||||
- **プロバイダーCLIを使用**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code) または [Codex](https://github.com/openai/codex) をインストール
|
||||
- **API直接利用**: **Anthropic API Key** または **OpenAI API Key**(CLI不要)
|
||||
- **プロバイダーCLIを使用**: [Claude Code](https://docs.anthropic.com/en/docs/claude-code)、[Codex](https://github.com/openai/codex)、または [OpenCode](https://opencode.ai) をインストール
|
||||
- **API直接利用**: **Anthropic API Key**、**OpenAI API Key**、または **OpenCode API Key**(CLI不要)
|
||||
|
||||
追加で必要なもの:
|
||||
|
||||
- [GitHub CLI](https://cli.github.com/) (`gh`) — `takt #N`(GitHub Issue実行)を使う場合のみ必要
|
||||
|
||||
**料金について**: API Key を使用する場合、TAKT は Claude API(Anthropic)または OpenAI API を直接呼び出します。料金体系は Claude Code や Codex を使った場合と同じです。特に CI/CD で自動実行する場合、API 使用量が増えるため、コストに注意してください。
|
||||
**料金について**: API Key を使用する場合、TAKT は Claude API(Anthropic)、OpenAI API、または OpenCode API を直接呼び出します。料金体系は各 CLI ツールを使った場合と同じです。特に CI/CD で自動実行する場合、API 使用量が増えるため、コストに注意してください。
|
||||
|
||||
## インストール
|
||||
|
||||
@ -186,7 +186,7 @@ takt #6 --auto-pr
|
||||
|
||||
### タスク管理(add / run / watch / list)
|
||||
|
||||
タスクファイル(`.takt/tasks/`)を使ったバッチ処理。複数のタスクを積んでおいて、後でまとめて実行する使い方に便利です。
|
||||
`.takt/tasks.yaml` と `.takt/tasks/{slug}/` を使ったバッチ処理。複数のタスクを積んでおいて、後でまとめて実行する使い方に便利です。
|
||||
|
||||
#### タスクを追加(`takt add`)
|
||||
|
||||
@ -201,14 +201,14 @@ takt add #28
|
||||
#### タスクを実行(`takt run`)
|
||||
|
||||
```bash
|
||||
# .takt/tasks/ の保留中タスクをすべて実行
|
||||
# .takt/tasks.yaml の保留中タスクをすべて実行
|
||||
takt run
|
||||
```
|
||||
|
||||
#### タスクを監視(`takt watch`)
|
||||
|
||||
```bash
|
||||
# .takt/tasks/ を監視してタスクを自動実行(常駐プロセス)
|
||||
# .takt/tasks.yaml を監視してタスクを自動実行(常駐プロセス)
|
||||
takt watch
|
||||
```
|
||||
|
||||
@ -225,6 +225,13 @@ takt list --non-interactive --action delete --branch takt/my-branch --yes
|
||||
takt list --non-interactive --format json
|
||||
```
|
||||
|
||||
#### タスクディレクトリ運用(作成・実行・確認)
|
||||
|
||||
1. `takt add` を実行して `.takt/tasks.yaml` に pending レコードが作られることを確認する。
|
||||
2. 生成された `.takt/tasks/{slug}/order.md` を開き、必要なら仕様や参考資料を追記する。
|
||||
3. `takt run`(または `takt watch`)で `tasks.yaml` の pending タスクを実行する。
|
||||
4. `task_dir` と同じスラッグの `.takt/runs/{slug}/reports/` を確認する。
|
||||
|
||||
### パイプラインモード(CI/自動化向け)
|
||||
|
||||
`--pipeline` を指定すると非対話のパイプラインモードに入ります。ブランチ作成 → ピース実行 → commit & push を自動で行います。CI/CD での自動化に適しています。
|
||||
@ -315,7 +322,7 @@ takt reset categories
|
||||
| `--repo <owner/repo>` | リポジトリ指定(PR作成時) |
|
||||
| `--create-worktree <yes\|no>` | worktree確認プロンプトをスキップ |
|
||||
| `-q, --quiet` | 最小限の出力モード: AIの出力を抑制(CI向け) |
|
||||
| `--provider <name>` | エージェントプロバイダーを上書き(claude\|codex\|mock) |
|
||||
| `--provider <name>` | エージェントプロバイダーを上書き(claude\|codex\|opencode\|mock) |
|
||||
| `--model <name>` | エージェントモデルを上書き |
|
||||
|
||||
## ピース
|
||||
@ -328,7 +335,7 @@ TAKTはYAMLベースのピース定義とルールベースルーティングを
|
||||
|
||||
```yaml
|
||||
name: default
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
initial_movement: plan
|
||||
|
||||
# セクションマップ — キー: ファイルパス(このYAMLからの相対パス)
|
||||
@ -466,6 +473,7 @@ TAKTには複数のビルトインピースが同梱されています:
|
||||
| `structural-reform` | プロジェクト全体の構造改革: 段階的なファイル分割を伴う反復的なコードベース再構成。 |
|
||||
| `unit-test` | ユニットテスト重視ピース: テスト分析 → テスト実装 → レビュー → 修正。 |
|
||||
| `e2e-test` | E2Eテスト重視ピース: E2E分析 → E2E実装 → レビュー → 修正(VitestベースのE2Eフロー)。 |
|
||||
| `frontend` | フロントエンド特化開発ピース: React/Next.js 向けのレビューとナレッジ注入。 |
|
||||
|
||||
**ペルソナ別プロバイダー設定:** 設定ファイルの `persona_providers` で、特定のペルソナを異なるプロバイダーにルーティングできます(例: coder は Codex、レビュアーは Claude)。ピースを複製する必要はありません。
|
||||
|
||||
@ -532,14 +540,14 @@ Claude Code はエイリアス(`opus`、`sonnet`、`haiku`、`opusplan`、`def
|
||||
|
||||
.takt/ # プロジェクトレベルの設定
|
||||
├── config.yaml # プロジェクト設定(現在のピース等)
|
||||
├── tasks/ # 保留中のタスクファイル(.yaml, .md)
|
||||
├── completed/ # 完了したタスクとレポート
|
||||
├── reports/ # 実行レポート(自動生成)
|
||||
│ └── {timestamp}-{slug}/
|
||||
└── logs/ # NDJSON 形式のセッションログ
|
||||
├── latest.json # 現在/最新セッションへのポインタ
|
||||
├── previous.json # 前回セッションへのポインタ
|
||||
└── {sessionId}.jsonl # ピース実行ごとの NDJSON セッションログ
|
||||
├── tasks/ # タスク入力ディレクトリ(.takt/tasks/{slug}/order.md など)
|
||||
├── tasks.yaml # 保留中タスクのメタデータ(task_dir, piece, worktree など)
|
||||
└── runs/ # 実行単位の成果物
|
||||
└── {slug}/
|
||||
├── reports/ # 実行レポート(自動生成)
|
||||
├── context/ # knowledge/policy/previous_response のスナップショット
|
||||
├── logs/ # この実行専用の NDJSON セッションログ
|
||||
└── meta.json # run メタデータ
|
||||
```
|
||||
|
||||
ビルトインリソースはnpmパッケージ(`builtins/`)に埋め込まれています。`~/.takt/` のユーザーファイルが優先されます。
|
||||
@ -553,11 +561,17 @@ Claude Code はエイリアス(`opus`、`sonnet`、`haiku`、`opusplan`、`def
|
||||
language: ja
|
||||
default_piece: default
|
||||
log_level: info
|
||||
provider: claude # デフォルトプロバイダー: claude または codex
|
||||
provider: claude # デフォルトプロバイダー: claude、codex、または opencode
|
||||
model: sonnet # デフォルトモデル(オプション)
|
||||
branch_name_strategy: romaji # ブランチ名生成: 'romaji'(高速)または 'ai'(低速)
|
||||
prevent_sleep: false # macOS の実行中スリープ防止(caffeinate)
|
||||
notification_sound: true # 通知音の有効/無効
|
||||
notification_sound_events: # タイミング別の通知音制御
|
||||
iteration_limit: false
|
||||
piece_complete: true
|
||||
piece_abort: true
|
||||
run_complete: true # 未設定時は有効。false を指定すると無効
|
||||
run_abort: true # 未設定時は有効。false を指定すると無効
|
||||
concurrency: 1 # takt run の並列タスク数(1-10、デフォルト: 1 = 逐次実行)
|
||||
task_poll_interval_ms: 500 # takt run 中の新タスク検出ポーリング間隔(100-5000、デフォルト: 500)
|
||||
interactive_preview_movements: 3 # 対話モードでのムーブメントプレビュー数(0-10、デフォルト: 3)
|
||||
@ -569,9 +583,10 @@ interactive_preview_movements: 3 # 対話モードでのムーブメントプ
|
||||
# ai-antipattern-reviewer: claude # レビュアーは Claude のまま
|
||||
|
||||
# API Key 設定(オプション)
|
||||
# 環境変数 TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY で上書き可能
|
||||
# 環境変数 TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY / TAKT_OPENCODE_API_KEY で上書き可能
|
||||
anthropic_api_key: sk-ant-... # Claude (Anthropic) を使う場合
|
||||
# openai_api_key: sk-... # Codex (OpenAI) を使う場合
|
||||
# opencode_api_key: ... # OpenCode を使う場合
|
||||
|
||||
# ビルトインピースのフィルタリング(オプション)
|
||||
# builtin_pieces_enabled: true # false でビルトイン全体を無効化
|
||||
@ -595,17 +610,17 @@ anthropic_api_key: sk-ant-... # Claude (Anthropic) を使う場合
|
||||
1. **環境変数で設定**:
|
||||
```bash
|
||||
export TAKT_ANTHROPIC_API_KEY=sk-ant-... # Claude の場合
|
||||
# または
|
||||
export TAKT_OPENAI_API_KEY=sk-... # Codex の場合
|
||||
export TAKT_OPENCODE_API_KEY=... # OpenCode の場合
|
||||
```
|
||||
|
||||
2. **設定ファイルで設定**:
|
||||
上記の `~/.takt/config.yaml` に `anthropic_api_key` または `openai_api_key` を記述
|
||||
上記の `~/.takt/config.yaml` に `anthropic_api_key`、`openai_api_key`、または `opencode_api_key` を記述
|
||||
|
||||
優先順位: 環境変数 > `config.yaml` の設定
|
||||
|
||||
**注意事項:**
|
||||
- API Key を設定した場合、Claude Code や Codex のインストールは不要です。TAKT が直接 Anthropic API または OpenAI API を呼び出します。
|
||||
- API Key を設定した場合、Claude Code、Codex、OpenCode のインストールは不要です。TAKT が直接各 API を呼び出します。
|
||||
- **セキュリティ**: `config.yaml` に API Key を記述した場合、このファイルを Git にコミットしないよう注意してください。環境変数での設定を使うか、`.gitignore` に `~/.takt/config.yaml` を追加することを検討してください。
|
||||
|
||||
**パイプラインテンプレート変数:**
|
||||
@ -621,36 +636,43 @@ anthropic_api_key: sk-ant-... # Claude (Anthropic) を使う場合
|
||||
1. ピースのムーブメントの `model`(最優先)
|
||||
2. カスタムエージェントの `model`
|
||||
3. グローバル設定の `model`
|
||||
4. プロバイダーデフォルト(Claude: sonnet、Codex: codex)
|
||||
4. プロバイダーデフォルト(Claude: sonnet、Codex: codex。OpenCode はデフォルトなしのためモデル指定が必要)
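この優先順位は「最初に定義された値を採用する」フォールバックであり、たとえば次のように表せる(関数名は説明用の仮定):

```typescript
// 仮のスケッチ: モデル解決の優先順位(上から順に最初に定義された値を採用)
function resolveModel(opts: {
  movementModel?: string;   // 1. ピースのムーブメントの model(最優先)
  agentModel?: string;      // 2. カスタムエージェントの model
  globalModel?: string;     // 3. グローバル設定の model
  providerDefault?: string; // 4. プロバイダーデフォルト
}): string | undefined {
  return opts.movementModel ?? opts.agentModel ?? opts.globalModel ?? opts.providerDefault;
}
```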
|
||||
|
||||
## 詳細ガイド
|
||||
|
||||
### タスクファイルの形式
|
||||
### タスクディレクトリ形式
|
||||
|
||||
TAKT は `.takt/tasks/` 内のタスクファイルによるバッチ処理をサポートしています。`.yaml`/`.yml` と `.md` の両方のファイル形式に対応しています。
|
||||
TAKT は `.takt/tasks.yaml` にタスクのメタデータを保存し、長文仕様は `.takt/tasks/{slug}/` に分離して管理します。
|
||||
|
||||
**YAML形式**(推奨、worktree/branch/pieceオプション対応):
|
||||
**推奨構成**:
|
||||
|
||||
```text
.takt/
  tasks/
    20260201-015714-foptng/
      order.md
      schema.sql
      wireframe.png
  tasks.yaml
  runs/
    20260201-015714-foptng/
      reports/
```
|
||||
|
||||
**tasks.yaml レコード例**:
|
||||
|
||||
```yaml
# .takt/tasks/add-auth.yaml
task: "認証機能を追加する"
worktree: true           # 隔離された共有クローンで実行
branch: "feat/add-auth"  # ブランチ名(省略時は自動生成)
piece: "default"         # ピース指定(省略時は現在のもの)
tasks:
  - name: add-auth-feature
    status: pending
    task_dir: .takt/tasks/20260201-015714-foptng
    piece: default
    created_at: "2026-02-01T01:57:14.000Z"
    started_at: null
    completed_at: null
```
|
||||
|
||||
**Markdown形式**(シンプル、後方互換):
|
||||
|
||||
```markdown
|
||||
# .takt/tasks/add-login-feature.md
|
||||
|
||||
アプリケーションにログイン機能を追加する。
|
||||
|
||||
要件:
|
||||
- ユーザー名とパスワードフィールド
|
||||
- フォームバリデーション
|
||||
- 失敗時のエラーハンドリング
|
||||
```
|
||||
`takt add` は `.takt/tasks/{slug}/order.md` を自動生成し、`tasks.yaml` には `task_dir` を保存します。
|
||||
|
||||
#### 共有クローンによる隔離実行
|
||||
|
||||
@ -667,15 +689,14 @@ YAMLタスクファイルで`worktree`を指定すると、各タスクを`git c
|
||||
|
||||
### セッションログ
|
||||
|
||||
TAKTはセッションログをNDJSON(`.jsonl`)形式で`.takt/logs/`に書き込みます。各レコードはアトミックに追記されるため、プロセスが途中でクラッシュしても部分的なログが保持され、`tail -f`でリアルタイムに追跡できます。
|
||||
TAKTはセッションログをNDJSON(`.jsonl`)形式で`.takt/runs/{slug}/logs/`に書き込みます。各レコードはアトミックに追記されるため、プロセスが途中でクラッシュしても部分的なログが保持され、`tail -f`でリアルタイムに追跡できます。
|
||||
|
||||
- `.takt/logs/latest.json` - 現在(または最新の)セッションへのポインタ
|
||||
- `.takt/logs/previous.json` - 前回セッションへのポインタ
|
||||
- `.takt/logs/{sessionId}.jsonl` - ピース実行ごとのNDJSONセッションログ
|
||||
- `.takt/runs/{slug}/logs/{sessionId}.jsonl` - ピース実行ごとのNDJSONセッションログ
|
||||
- `.takt/runs/{slug}/meta.json` - run メタデータ(`task`, `piece`, `start/end`, `status` など)
|
||||
|
||||
レコード種別: `piece_start`, `step_start`, `step_complete`, `piece_complete`, `piece_abort`
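NDJSON ログは1行1レコードなので、次のような最小スケッチで読み取れる(レコード形状は仮定。書きかけの行は読み飛ばす):

```typescript
// 仮のスケッチ: NDJSON セッションログを1行ずつパースする
import { readFileSync } from 'node:fs';

type LogRecord = { type: string; [key: string]: unknown };

function readNdjson(path: string): LogRecord[] {
  const records: LogRecord[] = [];
  for (const line of readFileSync(path, 'utf-8').split('\n')) {
    if (!line.trim()) continue; // 空行はスキップ
    try {
      records.push(JSON.parse(line) as LogRecord); // 各行が1レコード
    } catch {
      // 途中クラッシュで書きかけのまま残った行は読み飛ばす
    }
  }
  return records;
}
```

各レコードはアトミックに追記されるため、このように途中クラッシュ後でも完全な行だけを安全に読み出せる。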
|
||||
|
||||
エージェントは`previous.json`を読み取って前回の実行コンテキストを引き継ぐことができます。セッション継続は自動的に行われます — `takt "タスク"`を実行するだけで前回のセッションから続行されます。
|
||||
最新の previous response は `.takt/runs/{slug}/context/previous_responses/latest.md` に保存され、実行時に自動的に引き継がれます。
|
||||
|
||||
### カスタムピースの追加
|
||||
|
||||
@ -690,7 +711,7 @@ takt eject default
|
||||
# ~/.takt/pieces/my-piece.yaml
|
||||
name: my-piece
|
||||
description: カスタムピース
|
||||
max_iterations: 5
|
||||
max_movements: 5
|
||||
initial_movement: analyze
|
||||
|
||||
personas:
|
||||
@ -740,11 +761,11 @@ personas:
|
||||
|------|------|
|
||||
| `{task}` | 元のユーザーリクエスト(テンプレートになければ自動注入) |
|
||||
| `{iteration}` | ピース全体のターン数(実行された全ムーブメント数) |
|
||||
| `{max_iterations}` | 最大イテレーション数 |
|
||||
| `{max_movements}` | 最大ムーブメント数 |
|
||||
| `{movement_iteration}` | ムーブメントごとのイテレーション数(このムーブメントが実行された回数) |
|
||||
| `{previous_response}` | 前のムーブメントの出力(テンプレートになければ自動注入) |
|
||||
| `{user_inputs}` | ピース中の追加ユーザー入力(テンプレートになければ自動注入) |
|
||||
| `{report_dir}` | レポートディレクトリパス(例: `.takt/reports/20250126-143052-task-summary`) |
|
||||
| `{report_dir}` | レポートディレクトリパス(例: `.takt/runs/20250126-143052-task-summary/reports`) |
|
||||
| `{report:filename}` | `{report_dir}/filename` に展開(例: `{report:00-plan.md}`) |
|
||||
|
||||
### ピースの設計
|
||||
@ -777,7 +798,7 @@ rules:
|
||||
| `edit` | - | ムーブメントがプロジェクトファイルを編集できるか(`true`/`false`) |
|
||||
| `pass_previous_response` | `true` | 前のムーブメントの出力を`{previous_response}`に渡す |
|
||||
| `allowed_tools` | - | エージェントが使用できるツール一覧(Read, Glob, Grep, Edit, Write, Bash等) |
|
||||
| `provider` | - | このムーブメントのプロバイダーを上書き(`claude`または`codex`) |
|
||||
| `provider` | - | このムーブメントのプロバイダーを上書き(`claude`、`codex`、または`opencode`) |
|
||||
| `model` | - | このムーブメントのモデルを上書き |
|
||||
| `permission_mode` | - | パーミッションモード: `readonly`、`edit`、`full`(プロバイダー非依存) |
|
||||
| `output_contracts` | - | レポートファイルの出力契約定義 |
|
||||
@ -855,7 +876,7 @@ npm install -g takt
|
||||
takt --pipeline --task "バグ修正" --auto-pr --repo owner/repo
|
||||
```
|
||||
|
||||
認証には `TAKT_ANTHROPIC_API_KEY` または `TAKT_OPENAI_API_KEY` 環境変数を設定してください(TAKT 独自のプレフィックス付き)。
|
||||
認証には `TAKT_ANTHROPIC_API_KEY`、`TAKT_OPENAI_API_KEY`、または `TAKT_OPENCODE_API_KEY` 環境変数を設定してください(TAKT 独自のプレフィックス付き)。
|
||||
|
||||
```bash
|
||||
# Claude (Anthropic) を使う場合
|
||||
@ -863,6 +884,9 @@ export TAKT_ANTHROPIC_API_KEY=sk-ant-...
|
||||
|
||||
# Codex (OpenAI) を使う場合
|
||||
export TAKT_OPENAI_API_KEY=sk-...
|
||||
|
||||
# OpenCode を使う場合
|
||||
export TAKT_OPENCODE_API_KEY=...
|
||||
```
|
||||
|
||||
## ドキュメント
|
||||
|
||||
@ -431,7 +431,7 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
|
||||
2. **ログ初期化**:
|
||||
- `createSessionLog()`: セッションログオブジェクト作成
|
||||
- `initNdjsonLog()`: NDJSON形式のログファイル初期化
|
||||
- `updateLatestPointer()`: `latest.json` ポインタ更新
|
||||
- `meta.json` 更新: 実行ステータス(running/completed/aborted)と時刻を保存
|
||||
|
||||
3. **PieceEngine初期化**:
|
||||
```typescript
|
||||
@ -498,7 +498,7 @@ TAKTのデータフローは以下の7つの主要なレイヤーで構成され
|
||||
while (state.status === 'running') {
|
||||
// 1. Abort & Iteration チェック
|
||||
if (abortRequested) { ... }
|
||||
if (iteration >= maxIterations) { ... }
|
||||
if (iteration >= maxMovements) { ... }
|
||||
|
||||
// 2. ステップ取得
|
||||
const step = getStep(state.currentStep);
|
||||
@ -619,6 +619,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
|
||||
- Step Iteration (per-step)
|
||||
- Step name
|
||||
- Report Directory/File info
|
||||
- Run Source Paths (`.takt/runs/{slug}/context/...`)
|
||||
|
||||
3. **User Request** (タスク本文):
|
||||
- `{task}` プレースホルダーがテンプレートにない場合のみ自動注入
|
||||
@ -626,6 +627,8 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
|
||||
4. **Previous Response** (前ステップの出力):
|
||||
- `step.passPreviousResponse === true` かつ
|
||||
- `{previous_response}` プレースホルダーがテンプレートにない場合のみ自動注入
|
||||
- 長さ制御(2000 chars)と `...TRUNCATED...` を適用
|
||||
- Source Path を常時注入
|
||||
|
||||
5. **Additional User Inputs** (blocked時の追加入力):
|
||||
- `{user_inputs}` プレースホルダーがテンプレートにない場合のみ自動注入
|
||||
@ -643,7 +646,7 @@ const match = await detectMatchedRule(step, response.content, tagContent, {...})
|
||||
- `{previous_response}`: 前ステップの出力
|
||||
- `{user_inputs}`: 追加ユーザー入力
|
||||
- `{iteration}`: ピース全体のイテレーション
|
||||
- `{max_iterations}`: 最大イテレーション
|
||||
- `{max_movements}`: 最大ムーブメント
|
||||
- `{step_iteration}`: ステップのイテレーション
|
||||
- `{report_dir}`: レポートディレクトリ
|
||||
|
||||
@ -821,7 +824,7 @@ new PieceEngine(pieceConfig, cwd, task, {
|
||||
|
||||
1. **コンテキスト収集**:
|
||||
- `task`: 元のユーザーリクエスト
|
||||
- `iteration`, `maxIterations`: イテレーション情報
|
||||
- `iteration`, `maxMovements`: イテレーション情報
|
||||
- `stepIteration`: ステップごとの実行回数
|
||||
- `cwd`, `projectCwd`: ディレクトリ情報
|
||||
- `userInputs`: blocked時の追加入力
|
||||
|
||||
@ -331,7 +331,7 @@ Faceted Promptingの中核メカニズムは**宣言的な合成**である。
|
||||
|
||||
```yaml
|
||||
name: my-workflow
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
initial_movement: plan
|
||||
|
||||
movements:
|
||||
|
||||
@ -331,7 +331,7 @@ Key properties:
|
||||
|
||||
```yaml
|
||||
name: my-workflow
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
initial_movement: plan
|
||||
|
||||
movements:
|
||||
|
||||
@ -25,7 +25,7 @@ A piece is a YAML file that defines a sequence of steps executed by AI agents. E
|
||||
```yaml
|
||||
name: my-piece
|
||||
description: Optional description
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
initial_step: first-step # Optional, defaults to first step
|
||||
|
||||
steps:
|
||||
@ -55,11 +55,11 @@ steps:
|
||||
|----------|-------------|
|
||||
| `{task}` | Original user request (auto-injected if not in template) |
|
||||
| `{iteration}` | Piece-wide turn count (total steps executed) |
|
||||
| `{max_iterations}` | Maximum iterations allowed |
|
||||
| `{max_movements}` | Maximum movements allowed |
|
||||
| `{step_iteration}` | Per-step iteration count (how many times THIS step has run) |
|
||||
| `{previous_response}` | Previous step's output (auto-injected if not in template) |
|
||||
| `{user_inputs}` | Additional user inputs during piece (auto-injected if not in template) |
|
||||
| `{report_dir}` | Report directory path (e.g., `.takt/reports/20250126-143052-task-summary`) |
|
||||
| `{report_dir}` | Report directory path (e.g., `.takt/runs/20250126-143052-task-summary/reports`) |
|
||||
| `{report:filename}` | Resolves to `{report_dir}/filename` (e.g., `{report:00-plan.md}`) |
|
||||
|
||||
> **Note**: `{task}`, `{previous_response}`, and `{user_inputs}` are auto-injected into instructions. You only need explicit placeholders if you want to control their position in the template.
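As a rough sketch (names assumed, not TAKT's actual code), placeholder expansion can be modeled as a simple substitution that leaves unknown placeholders untouched, which is consistent with auto-injection only happening for placeholders absent from the template:

```typescript
// Hypothetical sketch: expand {placeholder} variables in an instruction template.
function expandTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{([a-z_]+)\}/g, (match, key) =>
    key in vars ? vars[key] : match, // unknown placeholders are left as-is
  );
}
```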
|
||||
@ -170,7 +170,7 @@ report:
|
||||
|
||||
```yaml
|
||||
name: simple-impl
|
||||
max_iterations: 5
|
||||
max_movements: 5
|
||||
|
||||
steps:
|
||||
- name: implement
|
||||
@ -191,7 +191,7 @@ steps:
|
||||
|
||||
```yaml
|
||||
name: with-review
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
|
||||
steps:
|
||||
- name: implement
|
||||
|
||||
@ -5,7 +5,8 @@ E2Eテストを追加・変更した場合は、このドキュメントも更
|
||||
## 前提条件
|
||||
- `gh` CLI が利用可能で、対象GitHubアカウントでログイン済みであること。
|
||||
- `takt-testing` リポジトリが対象アカウントに存在すること(E2Eがクローンして使用)。
|
||||
- 必要に応じて `TAKT_E2E_PROVIDER` を設定すること(例: `claude` / `codex`)。
|
||||
- 必要に応じて `TAKT_E2E_PROVIDER` を設定すること(例: `claude` / `codex` / `opencode`)。
|
||||
- `TAKT_E2E_PROVIDER=opencode` の場合は `TAKT_E2E_MODEL` が必須(例: `opencode/big-pickle`)。
|
||||
- 実行時間が長いテストがあるため、タイムアウトに注意すること。
|
||||
- E2Eは `e2e/helpers/test-repo.ts` が `gh` でリポジトリをクローンし、テンポラリディレクトリで実行する。
|
||||
- 対話UIを避けるため、E2E環境では `TAKT_NO_TTY=1` を設定してTTYを無効化する。
|
||||
@ -13,26 +14,35 @@ E2Eテストを追加・変更した場合は、このドキュメントも更
|
||||
- リポジトリクローン: `$(os.tmpdir())/takt-e2e-repo-<random>/`
|
||||
- 実行環境: `$(os.tmpdir())/takt-e2e-<runId>-<random>/`
|
||||
|
||||
## E2E用config.yaml
|
||||
- E2Eのグローバル設定は `e2e/fixtures/config.e2e.yaml` を基準に生成する。
|
||||
- `createIsolatedEnv()` は毎回一時ディレクトリ配下(`$TAKT_CONFIG_DIR/config.yaml`)にこの基準設定を書き出す。
|
||||
- 通知音は `notification_sound_events` でタイミング別に制御し、E2E既定では道中(`iteration_limit` / `piece_complete` / `piece_abort`)をOFF、全体終了時(`run_complete` / `run_abort`)のみONにする。
|
||||
- 各スペックで `provider` や `concurrency` を変更する場合は、`updateIsolatedConfig()` を使って差分のみ上書きする。
|
||||
- `~/.takt/config.yaml` はE2Eでは参照されないため、通常実行の設定には影響しない。
|
||||
|
||||
## 実行コマンド
|
||||
- `npm run test:e2e`: E2E全体を実行。
|
||||
- `npm run test:e2e:mock`: mock固定のE2Eのみ実行。
|
||||
- `npm run test:e2e:provider`: `claude` と `codex` の両方で実行。
|
||||
- `npm run test:e2e:provider:claude`: `TAKT_E2E_PROVIDER=claude` で実行。
|
||||
- `npm run test:e2e:provider:codex`: `TAKT_E2E_PROVIDER=codex` で実行。
|
||||
- `npm run test:e2e:provider:opencode`: `TAKT_E2E_PROVIDER=opencode` で実行(`TAKT_E2E_MODEL` 必須)。
|
||||
- `npm run test:e2e:all`: `mock` + `provider` を通しで実行。
|
||||
- `npm run test:e2e:claude`: `test:e2e:provider:claude` の別名。
|
||||
- `npm run test:e2e:codex`: `test:e2e:provider:codex` の別名。
|
||||
- `npm run test:e2e:opencode`: `test:e2e:provider:opencode` の別名。
|
||||
- `npx vitest run e2e/specs/add-and-run.e2e.ts`: 単体実行の例。
|
||||
|
||||
## シナリオ一覧
|
||||
- Add task and run(`e2e/specs/add-and-run.e2e.ts`)
|
||||
- 目的: `.takt/tasks/` にタスクYAMLを配置し、`takt run` が実行できることを確認。
|
||||
- 目的: `.takt/tasks.yaml` に pending タスクを配置し、`takt run` が実行できることを確認。
|
||||
- LLM: 条件付き(`TAKT_E2E_PROVIDER` が `claude` / `codex` の場合に呼び出す)
|
||||
- 手順(ユーザー行動/コマンド):
|
||||
- `.takt/tasks/e2e-test-task.yaml` にタスクを作成(`piece` は `e2e/fixtures/pieces/simple.yaml` を指定)。
|
||||
- `.takt/tasks.yaml` にタスクを作成(`piece` は `e2e/fixtures/pieces/simple.yaml` を指定)。
|
||||
- `takt run` を実行する。
|
||||
- `README.md` に行が追加されることを確認する。
|
||||
- タスクファイルが `tasks/` から移動されることを確認する。
|
||||
- 実行後にタスクが `tasks.yaml` から消えることを確認する。
|
||||
- Worktree/Clone isolation(`e2e/specs/worktree.e2e.ts`)
|
||||
- 目的: `--create-worktree yes` 指定で隔離環境に実行されることを確認。
|
||||
- LLM: 条件付き(`TAKT_E2E_PROVIDER` が `claude` / `codex` の場合に呼び出す)
|
||||
@ -83,13 +93,13 @@ E2Eテストを追加・変更した場合は、このドキュメントも更
|
||||
- `gh issue create ...` でIssueを作成する。
|
||||
- `TAKT_MOCK_SCENARIO=e2e/fixtures/scenarios/add-task.json` を設定する。
|
||||
- `takt add '#<issue>'` を実行し、`Create worktree?` に `n` で回答する。
|
||||
- `.takt/tasks/` にYAMLが生成されることを確認する。
|
||||
- `.takt/tasks.yaml` に `task_dir` が保存され、`.takt/tasks/{slug}/order.md` が生成されることを確認する。
|
||||
- Watch tasks(`e2e/specs/watch.e2e.ts`)
|
||||
- 目的: `takt watch` が監視中に追加されたタスクを実行できることを確認。
|
||||
- LLM: 呼び出さない(`--provider mock` 固定)
|
||||
- 手順(ユーザー行動/コマンド):
|
||||
- `takt watch --provider mock` を起動する。
|
||||
- `.takt/tasks/` にタスクYAMLを追加する(`piece` に `e2e/fixtures/pieces/mock-single-step.yaml` を指定)。
|
||||
- `.takt/tasks.yaml` に pending タスクを追加する(`piece` に `e2e/fixtures/pieces/mock-single-step.yaml` を指定)。
|
||||
- 出力に `Task "watch-task" completed` が含まれることを確認する。
|
||||
- `Ctrl+C` で終了する。
|
||||
- Run tasks graceful shutdown on SIGINT(`e2e/specs/run-sigint-graceful.e2e.ts`)
|
||||
@ -111,3 +121,27 @@ E2Eテストを追加・変更した場合は、このドキュメントも更
|
||||
- `takt list --non-interactive --action diff --branch <branch>` で差分統計が出力されることを確認する。
|
||||
- `takt list --non-interactive --action try --branch <branch>` で変更がステージされることを確認する。
|
||||
- `takt list --non-interactive --action merge --branch <branch>` でブランチがマージされ削除されることを確認する。
|
||||
- Config permission mode(`e2e/specs/cli-config.e2e.ts`)
|
||||
- 目的: `takt config` でパーミッションモードの切り替えと永続化を確認。
|
||||
- LLM: 呼び出さない(LLM不使用の操作のみ)
|
||||
- 手順(ユーザー行動/コマンド):
|
||||
- `takt config default` を実行し、`Switched to: default` が出力されることを確認する。
|
||||
- `takt config sacrifice-my-pc` を実行し、`Switched to: sacrifice-my-pc` が出力されることを確認する。
|
||||
- `takt config sacrifice-my-pc` 実行後、`.takt/config.yaml` に `permissionMode: sacrifice-my-pc` が保存されていることを確認する。
|
||||
- `takt config invalid-mode` を実行し、`Invalid mode` が出力されることを確認する。
|
||||
- Reset categories(`e2e/specs/cli-reset-categories.e2e.ts`)
|
||||
- 目的: `takt reset categories` でカテゴリオーバーレイのリセットを確認。
|
||||
- LLM: 呼び出さない(LLM不使用の操作のみ)
|
||||
- 手順(ユーザー行動/コマンド):
|
||||
- `takt reset categories` を実行する。
|
||||
- 出力に `reset` を含むことを確認する。
|
||||
- `$TAKT_CONFIG_DIR/preferences/piece-categories.yaml` が存在し `piece_categories: {}` を含むことを確認する。
|
||||
- Export Claude Code Skill(`e2e/specs/cli-export-cc.e2e.ts`)
|
||||
- 目的: `takt export-cc` でClaude Code Skillのデプロイを確認。
|
||||
- LLM: 呼び出さない(LLM不使用の操作のみ)
|
||||
- 手順(ユーザー行動/コマンド):
|
||||
- `HOME` を一時ディレクトリに設定する。
|
||||
- `takt export-cc` を実行する。
|
||||
- 出力に `ファイルをデプロイしました` を含むことを確認する。
|
||||
- `$HOME/.claude/skills/takt/SKILL.md` が存在することを確認する。
|
||||
- `$HOME/.claude/skills/takt/pieces/` および `$HOME/.claude/skills/takt/personas/` ディレクトリが存在し、それぞれ少なくとも1ファイルを含むことを確認する。
|
||||
|
||||
11
e2e/fixtures/config.e2e.yaml
Normal file
@ -0,0 +1,11 @@
|
||||
provider: claude
|
||||
language: en
|
||||
log_level: info
|
||||
default_piece: default
|
||||
notification_sound: true
|
||||
notification_sound_events:
|
||||
iteration_limit: false
|
||||
piece_complete: false
|
||||
piece_abort: false
|
||||
run_complete: true
|
||||
run_abort: true
|
||||
5
e2e/fixtures/pieces/broken.yaml
Normal file
@ -0,0 +1,5 @@
|
||||
name: broken
|
||||
this is not valid YAML
|
||||
- indentation: [wrong
|
||||
movements:
|
||||
broken: {{{
|
||||
27
e2e/fixtures/pieces/mock-max-iter.yaml
Normal file
@ -0,0 +1,27 @@
|
||||
name: e2e-mock-max-iter
|
||||
description: Piece with max_movements=2 that loops between two steps
|
||||
|
||||
max_movements: 2
|
||||
|
||||
initial_movement: step-a
|
||||
|
||||
movements:
|
||||
- name: step-a
|
||||
edit: true
|
||||
persona: ../agents/test-coder.md
|
||||
permission_mode: edit
|
||||
instruction_template: |
|
||||
{task}
|
||||
rules:
|
||||
- condition: Done
|
||||
next: step-b
|
||||
|
||||
- name: step-b
|
||||
edit: true
|
||||
persona: ../agents/test-coder.md
|
||||
permission_mode: edit
|
||||
instruction_template: |
|
||||
Continue the task.
|
||||
rules:
|
||||
- condition: Done
|
||||
next: step-a
|
||||
15
e2e/fixtures/pieces/mock-no-match.yaml
Normal file
@ -0,0 +1,15 @@
|
||||
name: e2e-mock-no-match
|
||||
description: Piece with a strict rule condition that will not match mock output
|
||||
|
||||
max_movements: 3
|
||||
|
||||
movements:
|
||||
- name: execute
|
||||
edit: true
|
||||
persona: ../agents/test-coder.md
|
||||
permission_mode: edit
|
||||
instruction_template: |
|
||||
{task}
|
||||
rules:
|
||||
- condition: SpecificMatchThatWillNotOccur
|
||||
next: COMPLETE
|
||||
@ -1,7 +1,7 @@
|
||||
name: e2e-mock-single
|
||||
description: Minimal mock-only piece for CLI E2E
|
||||
|
||||
max_iterations: 3
|
||||
max_movements: 3
|
||||
|
||||
movements:
|
||||
- name: execute
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
name: e2e-mock-slow-multi-step
|
||||
description: Multi-step mock piece to keep tasks in-flight long enough for SIGINT E2E
|
||||
|
||||
max_iterations: 20
|
||||
max_movements: 20
|
||||
|
||||
initial_movement: step-1
|
||||
|
||||
|
||||
27
e2e/fixtures/pieces/mock-two-step.yaml
Normal file
@ -0,0 +1,27 @@
|
||||
name: e2e-mock-two-step
|
||||
description: Two-step sequential piece for E2E testing
|
||||
|
||||
max_movements: 5
|
||||
|
||||
initial_movement: step-1
|
||||
|
||||
movements:
|
||||
- name: step-1
|
||||
edit: true
|
||||
persona: ../agents/test-coder.md
|
||||
permission_mode: edit
|
||||
instruction_template: |
|
||||
{task}
|
||||
rules:
|
||||
- condition: Done
|
||||
next: step-2
|
||||
|
||||
- name: step-2
|
||||
edit: true
|
||||
persona: ../agents/test-coder.md
|
||||
permission_mode: edit
|
||||
instruction_template: |
|
||||
Continue the task.
|
||||
rules:
|
||||
- condition: Done
|
||||
next: COMPLETE
|
||||
@ -1,7 +1,7 @@
|
||||
name: e2e-multi-step-parallel
|
||||
description: Multi-step piece with parallel sub-movements for E2E testing
|
||||
|
||||
max_iterations: 10
|
||||
max_movements: 10
|
||||
|
||||
initial_movement: plan
|
||||
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
name: e2e-report-judge
|
||||
description: E2E piece that exercises report + judge phases
|
||||
|
||||
max_iterations: 3
|
||||
max_movements: 3
|
||||
|
||||
movements:
|
||||
- name: execute
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
name: e2e-simple
|
||||
description: Minimal E2E test piece
|
||||
|
||||
max_iterations: 5
|
||||
max_movements: 5
|
||||
|
||||
movements:
|
||||
- name: execute
|
||||
|
||||
18
e2e/fixtures/scenarios/max-iter-loop.json
Normal file
@ -0,0 +1,18 @@
|
||||
[
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step A output."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step B output."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step A output again."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step B output again."
|
||||
}
|
||||
]
|
||||
6
e2e/fixtures/scenarios/no-match.json
Normal file
@ -0,0 +1,6 @@
|
||||
[
|
||||
{
|
||||
"status": "error",
|
||||
"content": "Simulated failure: API error during execution"
|
||||
}
|
||||
]
|
||||
6
e2e/fixtures/scenarios/one-entry-only.json
Normal file
@ -0,0 +1,6 @@
|
||||
[
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Only entry in scenario."
|
||||
}
|
||||
]
|
||||
14
e2e/fixtures/scenarios/run-three-tasks.json
Normal file
@ -0,0 +1,14 @@
|
||||
[
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Task 1 completed successfully."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Task 2 completed successfully."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Task 3 completed successfully."
|
||||
}
|
||||
]
|
||||
14
e2e/fixtures/scenarios/run-with-failure.json
Normal file
@ -0,0 +1,14 @@
|
||||
[
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Task 1 completed successfully."
|
||||
},
|
||||
{
|
||||
"status": "error",
|
||||
"content": "Task 2 encountered an error."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Task 3 completed successfully."
|
||||
}
|
||||
]
|
||||
10
e2e/fixtures/scenarios/two-step-done.json
Normal file
@ -0,0 +1,10 @@
|
||||
[
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step 1 output text completed."
|
||||
},
|
||||
{
|
||||
"status": "done",
|
||||
"content": "Step 2 output text completed."
|
||||
}
|
||||
]
|
||||
@ -1,6 +1,8 @@
|
||||
import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
|
||||
import { join } from 'node:path';
|
||||
import { mkdtempSync, mkdirSync, readFileSync, writeFileSync, rmSync } from 'node:fs';
|
||||
import { dirname, join, resolve } from 'node:path';
|
||||
import { tmpdir } from 'node:os';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { parse as parseYaml, stringify as stringifyYaml } from 'yaml';
|
||||
|
||||
export interface IsolatedEnv {
|
||||
runId: string;
|
||||
@ -9,6 +11,73 @@ export interface IsolatedEnv {
|
||||
cleanup: () => void;
|
||||
}
|
||||
|
||||
type E2EConfig = Record<string, unknown>;
|
||||
type NotificationSoundEvents = Record<string, unknown>;
|
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const E2E_CONFIG_FIXTURE_PATH = resolve(__dirname, '../fixtures/config.e2e.yaml');

function readE2EFixtureConfig(): E2EConfig {
  const raw = readFileSync(E2E_CONFIG_FIXTURE_PATH, 'utf-8');
  const parsed = parseYaml(raw);
  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
    throw new Error(`Invalid E2E config fixture: ${E2E_CONFIG_FIXTURE_PATH}`);
  }
  return parsed as E2EConfig;
}

function writeConfigFile(taktDir: string, config: E2EConfig): void {
  writeFileSync(join(taktDir, 'config.yaml'), stringifyYaml(config));
}

function parseNotificationSoundEvents(
  source: E2EConfig,
  sourceName: string,
): NotificationSoundEvents | undefined {
  const value = source.notification_sound_events;
  if (value === undefined) {
    return undefined;
  }
  if (!value || typeof value !== 'object' || Array.isArray(value)) {
    throw new Error(
      `Invalid notification_sound_events in ${sourceName}: expected object`,
    );
  }
  return value as NotificationSoundEvents;
}

function mergeIsolatedConfig(
  fixture: E2EConfig,
  current: E2EConfig,
  patch: E2EConfig,
): E2EConfig {
  const merged: E2EConfig = { ...fixture, ...current, ...patch };
  const fixtureEvents = parseNotificationSoundEvents(fixture, 'fixture');
  const currentEvents = parseNotificationSoundEvents(current, 'current config');
  const patchEvents = parseNotificationSoundEvents(patch, 'patch');
  if (!fixtureEvents && !currentEvents && !patchEvents) {
    return merged;
  }
  merged.notification_sound_events = {
    ...(fixtureEvents ?? {}),
    ...(currentEvents ?? {}),
    ...(patchEvents ?? {}),
  };
  return merged;
}
export function updateIsolatedConfig(taktDir: string, patch: E2EConfig): void {
  const fixture = readE2EFixtureConfig();
  const configPath = join(taktDir, 'config.yaml');
  const raw = readFileSync(configPath, 'utf-8');
  const parsed = parseYaml(raw);
  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
    throw new Error(`Invalid isolated config: ${configPath}`);
  }
  writeConfigFile(taktDir, mergeIsolatedConfig(fixture, parsed as E2EConfig, patch));
}
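The merge precedence above can be illustrated with a standalone sketch (the `Cfg` type, `mergeCfg` name, and sample values are hypothetical simplifications, not the real `E2EConfig`): later sources win key-by-key, except `notification_sound_events`, which is merged one level deep.

```typescript
// Simplified stand-ins for the real config types (illustration only).
type Events = Record<string, string>;
interface Cfg {
  provider?: string;
  model?: string;
  notification_sound_events?: Events;
}

function mergeCfg(fixture: Cfg, current: Cfg, patch: Cfg): Cfg {
  // Shallow merge: patch > current > fixture.
  const merged: Cfg = { ...fixture, ...current, ...patch };
  // Sound events are merged one level deep instead of replaced wholesale.
  const events: Events = {
    ...(fixture.notification_sound_events ?? {}),
    ...(current.notification_sound_events ?? {}),
    ...(patch.notification_sound_events ?? {}),
  };
  if (Object.keys(events).length > 0) {
    merged.notification_sound_events = events;
  }
  return merged;
}

const out = mergeCfg(
  { provider: 'claude', notification_sound_events: { on_complete: 'done.wav' } },
  { model: 'sonnet' },
  { provider: 'mock', notification_sound_events: { on_error: 'alert.wav' } },
);
// Patch wins on provider, current keeps its model, and both sound
// events survive the one-level-deep merge.
console.log(out);
```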
/**
 * Create an isolated environment for E2E testing.
 *
@@ -24,18 +93,21 @@ export function createIsolatedEnv(): IsolatedEnv {
   const gitConfigPath = join(baseDir, '.gitconfig');
 
   // Create TAKT config directory and config.yaml
   // Use TAKT_E2E_PROVIDER to match config provider with the actual provider being tested
-  const configProvider = process.env.TAKT_E2E_PROVIDER ?? 'claude';
   mkdirSync(taktDir, { recursive: true });
-  writeFileSync(
-    join(taktDir, 'config.yaml'),
-    [
-      `provider: ${configProvider}`,
-      'language: en',
-      'log_level: info',
-      'default_piece: default',
-    ].join('\n'),
-  );
+  const baseConfig = readE2EFixtureConfig();
+  const provider = process.env.TAKT_E2E_PROVIDER;
+  const model = process.env.TAKT_E2E_MODEL;
+  if (provider === 'opencode' && !model) {
+    throw new Error('TAKT_E2E_PROVIDER=opencode requires TAKT_E2E_MODEL (e.g. opencode/big-pickle)');
+  }
+  const config = provider
+    ? {
+        ...baseConfig,
+        provider,
+        ...(provider === 'opencode' && model ? { model } : {}),
+      }
+    : baseConfig;
+  writeConfigFile(taktDir, config);
 
   // Create isolated Git config file
   writeFileSync(
@@ -58,11 +130,7 @@ export function createIsolatedEnv(): IsolatedEnv {
     taktDir,
     env,
     cleanup: () => {
-      try {
-        rmSync(baseDir, { recursive: true, force: true });
-      } catch {
-        // Best-effort cleanup; ignore errors (e.g., already deleted)
-      }
+      rmSync(baseDir, { recursive: true, force: true });
     },
   };
 }
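The provider override logic added to `createIsolatedEnv` can be sketched as a standalone function (`resolveProviderConfig` is a hypothetical name used only for illustration; it mirrors the env-variable handling in the diff above):

```typescript
// Simplified config shape for illustration; not the real E2EConfig.
interface BaseCfg { provider: string; model?: string }

function resolveProviderConfig(
  base: BaseCfg,
  env: { TAKT_E2E_PROVIDER?: string; TAKT_E2E_MODEL?: string },
): BaseCfg {
  const provider = env.TAKT_E2E_PROVIDER;
  const model = env.TAKT_E2E_MODEL;
  // No override: keep the fixture config as-is.
  if (!provider) return base;
  if (provider === 'opencode' && !model) {
    // opencode has no usable default model, so the caller must pin one.
    throw new Error('TAKT_E2E_PROVIDER=opencode requires TAKT_E2E_MODEL');
  }
  return {
    ...base,
    provider,
    // The model override is only applied for opencode.
    ...(provider === 'opencode' && model ? { model } : {}),
  };
}

console.log(resolveProviderConfig({ provider: 'claude' }, { TAKT_E2E_PROVIDER: 'mock' }));
```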
@@ -74,10 +74,10 @@ describe('E2E: Add task and run (takt add → takt run)', () => {
     const readme = readFileSync(readmePath, 'utf-8');
     expect(readme).toContain('E2E test passed');
 
-    // Verify task status became completed
+    // Verify completed task was removed from tasks.yaml
     const tasksRaw = readFileSync(tasksFile, 'utf-8');
     const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
     const executed = parsed.tasks?.find((task) => task.name === 'e2e-test-task');
-    expect(executed?.status).toBe('completed');
+    expect(executed).toBeUndefined();
   }, 240_000);
 });
@@ -1,10 +1,14 @@
 import { describe, it, expect, beforeEach, afterEach } from 'vitest';
 import { execFileSync } from 'node:child_process';
-import { readFileSync, writeFileSync } from 'node:fs';
+import { readFileSync, existsSync } from 'node:fs';
 import { join, dirname, resolve } from 'node:path';
 import { fileURLToPath } from 'node:url';
 import { parse as parseYaml } from 'yaml';
-import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
+import {
+  createIsolatedEnv,
+  updateIsolatedConfig,
+  type IsolatedEnv,
+} from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';
 import { runTakt } from '../helpers/takt-runner';
 
@@ -22,16 +26,10 @@ describe('E2E: Add task from GitHub issue (takt add)', () => {
     testRepo = createTestRepo();
 
     // Use mock provider to stabilize summarizer
-    writeFileSync(
-      join(isolatedEnv.taktDir, 'config.yaml'),
-      [
-        'provider: mock',
-        'model: mock-model',
-        'language: en',
-        'log_level: info',
-        'default_piece: default',
-      ].join('\n'),
-    );
+    updateIsolatedConfig(isolatedEnv.taktDir, {
+      provider: 'mock',
+      model: 'mock-model',
+    });
 
     const createOutput = execFileSync(
       'gh',
@@ -87,8 +85,12 @@ describe('E2E: Add task from GitHub issue (takt add)', () => {
 
     const tasksFile = join(testRepo.path, '.takt', 'tasks.yaml');
     const content = readFileSync(tasksFile, 'utf-8');
-    const parsed = parseYaml(content) as { tasks?: Array<{ issue?: number }> };
+    const parsed = parseYaml(content) as { tasks?: Array<{ issue?: number; task_dir?: string }> };
     expect(parsed.tasks?.length).toBe(1);
     expect(parsed.tasks?.[0]?.issue).toBe(Number(issueNumber));
+    expect(parsed.tasks?.[0]?.task_dir).toBeTypeOf('string');
+    const orderPath = join(testRepo.path, String(parsed.tasks?.[0]?.task_dir), 'order.md');
+    expect(existsSync(orderPath)).toBe(true);
+    expect(readFileSync(orderPath, 'utf-8')).toContain('E2E Add Issue');
   }, 240_000);
 });
e2e/specs/cli-catalog.e2e.ts (new file, 85 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-catalog-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Catalog command (takt catalog)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should list all facet types when no argument given', () => {
    // Given: a local repo with isolated env

    // When: running takt catalog
    const result = runTakt({
      args: ['catalog'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains facet type sections
    expect(result.exitCode).toBe(0);
    const output = result.stdout.toLowerCase();
    expect(output).toMatch(/persona/);
  });

  it('should list facets for a specific type', () => {
    // Given: a local repo with isolated env

    // When: running takt catalog personas
    const result = runTakt({
      args: ['catalog', 'personas'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains persona names
    expect(result.exitCode).toBe(0);
    expect(result.stdout).toMatch(/coder/i);
  });

  it('should error for an invalid facet type', () => {
    // Given: a local repo with isolated env

    // When: running takt catalog with an invalid type
    const result = runTakt({
      args: ['catalog', 'invalidtype'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains an error or lists valid types
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/invalid|not found|valid types|unknown/i);
  });
});
e2e/specs/cli-clear.e2e.ts (new file, 55 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-clear-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Clear sessions command (takt clear)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should clear sessions without error', () => {
    // Given: a local repo with isolated env

    // When: running takt clear
    const result = runTakt({
      args: ['clear'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: exits cleanly
    expect(result.exitCode).toBe(0);
    const output = result.stdout.toLowerCase();
    expect(output).toMatch(/clear|session|removed|no session/);
  });
});
e2e/specs/cli-config.e2e.ts (new file, 102 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, readFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-config-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Config command (takt config)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should switch to default mode with explicit argument', () => {
    // Given: a local repo with isolated env

    // When: running takt config default
    const result = runTakt({
      args: ['config', 'default'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: exits successfully and outputs switched message
    expect(result.exitCode).toBe(0);
    const output = result.stdout;
    expect(output).toMatch(/Switched to: default/);
  });

  it('should switch to sacrifice-my-pc mode with explicit argument', () => {
    // Given: a local repo with isolated env

    // When: running takt config sacrifice-my-pc
    const result = runTakt({
      args: ['config', 'sacrifice-my-pc'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: exits successfully and outputs switched message
    expect(result.exitCode).toBe(0);
    const output = result.stdout;
    expect(output).toMatch(/Switched to: sacrifice-my-pc/);
  });

  it('should persist permission mode to project config', () => {
    // Given: a local repo with isolated env

    // When: running takt config sacrifice-my-pc
    runTakt({
      args: ['config', 'sacrifice-my-pc'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: .takt/config.yaml contains permissionMode: sacrifice-my-pc
    const configPath = join(repo.path, '.takt', 'config.yaml');
    const content = readFileSync(configPath, 'utf-8');
    expect(content).toMatch(/permissionMode:\s*sacrifice-my-pc/);
  });

  it('should report error for invalid mode name', () => {
    // Given: a local repo with isolated env

    // When: running takt config with an invalid mode
    const result = runTakt({
      args: ['config', 'invalid-mode'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains invalid mode message
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/Invalid mode/);
  });
});
e2e/specs/cli-export-cc.e2e.ts (new file, 88 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, existsSync, readdirSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-export-cc-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Export-cc command (takt export-cc)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };
  let fakeHome: string;

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
    fakeHome = mkdtempSync(join(tmpdir(), 'takt-e2e-export-cc-home-'));
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
    try { rmSync(fakeHome, { recursive: true, force: true }); } catch { /* best-effort */ }
  });

  it('should deploy skill files to isolated home directory', () => {
    // Given: a local repo with isolated env and HOME redirected to fakeHome
    const env: NodeJS.ProcessEnv = { ...isolatedEnv.env, HOME: fakeHome };

    // When: running takt export-cc
    const result = runTakt({
      args: ['export-cc'],
      cwd: repo.path,
      env,
    });

    // Then: exits successfully and outputs deploy message (Japanese CLI output)
    expect(result.exitCode).toBe(0);
    const output = result.stdout;
    expect(output).toMatch(/ファイルをデプロイしました/);

    // Then: SKILL.md exists in the skill directory
    const skillMdPath = join(fakeHome, '.claude', 'skills', 'takt', 'SKILL.md');
    expect(existsSync(skillMdPath)).toBe(true);
  });

  it('should deploy resource directories', () => {
    // Given: a local repo with isolated env and HOME redirected to fakeHome
    const env: NodeJS.ProcessEnv = { ...isolatedEnv.env, HOME: fakeHome };

    // When: running takt export-cc
    runTakt({
      args: ['export-cc'],
      cwd: repo.path,
      env,
    });

    // Then: pieces/ and personas/ directories exist with at least one file each
    const skillDir = join(fakeHome, '.claude', 'skills', 'takt');

    const piecesDir = join(skillDir, 'pieces');
    expect(existsSync(piecesDir)).toBe(true);
    const pieceFiles = readdirSync(piecesDir);
    expect(pieceFiles.length).toBeGreaterThan(0);

    const personasDir = join(skillDir, 'personas');
    expect(existsSync(personasDir)).toBe(true);
    const personaFiles = readdirSync(personasDir);
    expect(personaFiles.length).toBeGreaterThan(0);
  });
});
e2e/specs/cli-help.e2e.ts (new file, 73 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-help-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Help command (takt --help)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should display subcommand list with --help', () => {
    // Given: a local repo with isolated env

    // When: running takt --help
    const result = runTakt({
      args: ['--help'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output lists subcommands
    expect(result.exitCode).toBe(0);
    expect(result.stdout).toMatch(/run/);
    expect(result.stdout).toMatch(/add/);
    expect(result.stdout).toMatch(/list/);
    expect(result.stdout).toMatch(/eject/);
  });

  it('should display run subcommand help with takt run --help', () => {
    // Given: a local repo with isolated env

    // When: running takt run --help
    const result = runTakt({
      args: ['run', '--help'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains run command description
    expect(result.exitCode).toBe(0);
    const output = result.stdout.toLowerCase();
    expect(output).toMatch(/run|task|pending/);
  });
});
e2e/specs/cli-prompt.e2e.ts (new file, 76 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { join, resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-prompt-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Prompt preview command (takt prompt)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should output prompt preview header and movement info for a piece', () => {
    // Given: a piece file path
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

    // When: running takt prompt with piece path
    const result = runTakt({
      args: ['prompt', piecePath],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: output contains "Prompt Preview" header and movement info
    // (may fail on Phase 3 for pieces with tag-based rules, but header is still output)
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/Prompt Preview|Movement 1/i);
  });

  it('should report not found for a nonexistent piece name', () => {
    // Given: a nonexistent piece name

    // When: running takt prompt with invalid piece
    const result = runTakt({
      args: ['prompt', 'nonexistent-piece-xyz'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: reports piece not found
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/not found/i);
  });
});
e2e/specs/cli-reset-categories.e2e.ts (new file, 61 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, readFileSync, existsSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-reset-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Reset categories command (takt reset categories)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should reset categories and create overlay file', () => {
    // Given: a local repo with isolated env

    // When: running takt reset categories
    const result = runTakt({
      args: ['reset', 'categories'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: exits successfully and outputs reset message
    expect(result.exitCode).toBe(0);
    const output = result.stdout;
    expect(output).toMatch(/reset/i);

    // Then: piece-categories.yaml exists with initial content
    const categoriesPath = join(isolatedEnv.taktDir, 'preferences', 'piece-categories.yaml');
    expect(existsSync(categoriesPath)).toBe(true);
    const content = readFileSync(categoriesPath, 'utf-8');
    expect(content).toContain('piece_categories: {}');
  });
});
e2e/specs/cli-switch.e2e.ts (new file, 70 lines)

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-switch-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Switch piece command (takt switch)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should switch piece when a valid piece name is given', () => {
    // Given: a local repo with isolated env

    // When: running takt switch default
    const result = runTakt({
      args: ['switch', 'default'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: exits successfully
    expect(result.exitCode).toBe(0);
    const output = result.stdout.toLowerCase();
    expect(output).toMatch(/default|switched|piece/);
  });

  it('should error when a nonexistent piece name is given', () => {
    // Given: a local repo with isolated env

    // When: running takt switch with a nonexistent piece name
    const result = runTakt({
      args: ['switch', 'nonexistent-piece-xyz'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Then: error output
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/not found|error|does not exist/i);
  });
});
157  e2e/specs/error-handling.e2e.ts  Normal file
@@ -0,0 +1,157 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-error-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Error handling edge cases (mock)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should error when --piece points to a nonexistent file path', () => {
    // Given: a nonexistent piece file path

    // When: running with a bad piece path
    const result = runTakt({
      args: [
        '--task', 'test',
        '--piece', '/nonexistent/path/to/piece.yaml',
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: exits with error
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/not found|does not exist|ENOENT/i);
  }, 240_000);

  it('should report error when --piece specifies a nonexistent piece name', () => {
    // Given: a nonexistent piece name

    // When: running with a bad piece name
    const result = runTakt({
      args: [
        '--task', 'test',
        '--piece', 'nonexistent-piece-name-xyz',
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: output contains error about piece not found
    // Note: takt reports the error but currently exits with code 0
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/not found/i);
  }, 240_000);

  it('should error when --pipeline is used without --task or --issue', () => {
    // Given: pipeline mode with no task or issue
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

    // When: running in pipeline mode without a task
    const result = runTakt({
      args: [
        '--pipeline',
        '--piece', piecePath,
        '--skip-git',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: exits with error (should not hang in interactive mode due to TAKT_NO_TTY=1)
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/task|issue|required/i);
  }, 240_000);

  it('should error when --create-worktree receives an invalid value', () => {
    // Given: invalid worktree value
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

    // When: running with invalid worktree option
    const result = runTakt({
      args: [
        '--task', 'test',
        '--piece', piecePath,
        '--create-worktree', 'invalid-value',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: exits with error or warning about invalid value
    const combined = result.stdout + result.stderr;
    const hasError = result.exitCode !== 0 || combined.match(/invalid|error|must be/i);
    expect(hasError).toBeTruthy();
  }, 240_000);

  it('should error when piece file contains invalid YAML', () => {
    // Given: a broken YAML piece file
    const brokenPiecePath = resolve(__dirname, '../fixtures/pieces/broken.yaml');

    // When: running with the broken piece
    const result = runTakt({
      args: [
        '--task', 'test',
        '--piece', brokenPiecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: exits with error about parsing
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/parse|invalid|error|validation/i);
  }, 240_000);
});
124  e2e/specs/piece-error-handling.e2e.ts  Normal file
@@ -0,0 +1,124 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-piece-err-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Piece error handling (mock)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should abort when agent returns error status', () => {
    // Given: a piece and a scenario that returns error status
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-no-match.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/no-match.json');

    // When: executing the piece
    const result = runTakt({
      args: [
        '--task', 'Test error status abort',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: piece aborts with a non-zero exit code
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/failed|aborted|error/i);
  }, 240_000);

  it('should abort when max_movements is reached', () => {
    // Given: a piece with max_movements=2 that loops between step-a and step-b
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-max-iter.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/max-iter-loop.json');

    // When: executing the piece
    const result = runTakt({
      args: [
        '--task', 'Test max movements',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: piece aborts due to iteration limit
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/Max movements|iteration|aborted/i);
  }, 240_000);

  it('should pass previous response between sequential steps', () => {
    // Given: a two-step piece and a scenario with distinct step outputs
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-two-step.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/two-step-done.json');

    // When: executing the piece
    const result = runTakt({
      args: [
        '--task', 'Test previous response passing',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: piece completes successfully (both steps execute)
    expect(result.exitCode).toBe(0);
    expect(result.stdout).toContain('Piece completed');
  }, 240_000);
});
131  e2e/specs/provider-error.e2e.ts  Normal file
@@ -0,0 +1,131 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
  createIsolatedEnv,
  updateIsolatedConfig,
  type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-provider-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Provider error handling (mock)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should override config provider with --provider flag', () => {
    // Given: config.yaml has provider: claude, but CLI flag specifies mock
    updateIsolatedConfig(isolatedEnv.taktDir, {
      provider: 'claude',
    });

    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');

    // When: running with --provider mock
    const result = runTakt({
      args: [
        '--task', 'Test provider override',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: executes successfully with mock provider
    expect(result.exitCode).toBe(0);
    expect(result.stdout).toContain('Piece completed');
  }, 240_000);

  it('should use default mock response when scenario entries are exhausted', () => {
    // Given: a two-step piece with only 1 scenario entry
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-two-step.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/one-entry-only.json');

    // When: executing the piece (step-2 will have no scenario entry)
    const result = runTakt({
      args: [
        '--task', 'Test scenario exhaustion',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: does not crash; either completes or aborts gracefully
    const combined = result.stdout + result.stderr;
    expect(combined).not.toContain('UnhandledPromiseRejection');
    expect(combined).not.toContain('SIGTERM');
  }, 240_000);

  it('should error when scenario file does not exist', () => {
    // Given: TAKT_MOCK_SCENARIO pointing to a non-existent file
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

    // When: executing with a bad scenario path
    const result = runTakt({
      args: [
        '--task', 'Test bad scenario',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: '/nonexistent/path/scenario.json',
      },
      timeout: 240_000,
    });

    // Then: exits with error and clear message
    expect(result.exitCode).not.toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/[Ss]cenario file not found|ENOENT/);
  }, 240_000);
});
72  e2e/specs/quiet-mode.e2e.ts  Normal file
@@ -0,0 +1,72 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-quiet-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Quiet mode (--quiet)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should suppress AI stream output in quiet mode', () => {
    // Given: a simple piece and scenario
    const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');

    // When: running with --quiet flag
    const result = runTakt({
      args: [
        '--task', 'Test quiet mode',
        '--piece', piecePath,
        '--create-worktree', 'no',
        '--provider', 'mock',
        '--quiet',
      ],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: completes successfully; mock content should not appear in output
    expect(result.exitCode).toBe(0);
    // In quiet mode, the raw mock response text should be suppressed
    expect(result.stdout).not.toContain('Mock response for persona');
  }, 240_000);
});
183  e2e/specs/run-multiple-tasks.e2e.ts  Normal file
@@ -0,0 +1,183 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
  createIsolatedEnv,
  updateIsolatedConfig,
  type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-run-multi-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Run multiple tasks (takt run)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();

    // Override config to use mock provider
    updateIsolatedConfig(isolatedEnv.taktDir, {
      provider: 'mock',
    });
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should execute all pending tasks sequentially', () => {
    // Given: 3 pending tasks in tasks.yaml
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/run-three-tasks.json');
    const now = new Date().toISOString();

    mkdirSync(join(repo.path, '.takt'), { recursive: true });
    writeFileSync(
      join(repo.path, '.takt', 'tasks.yaml'),
      [
        'tasks:',
        '  - name: task-1',
        '    status: pending',
        '    content: "E2E task 1"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
        '  - name: task-2',
        '    status: pending',
        '    content: "E2E task 2"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
        '  - name: task-3',
        '    status: pending',
        '    content: "E2E task 3"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
      ].join('\n'),
      'utf-8',
    );

    // When: running takt run
    const result = runTakt({
      args: ['run', '--provider', 'mock'],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: all 3 tasks complete
    expect(result.exitCode).toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toContain('task-1');
    expect(combined).toContain('task-2');
    expect(combined).toContain('task-3');
  }, 240_000);

  it('should continue remaining tasks when one task fails', () => {
    // Given: 3 tasks where the 2nd will fail (error status)
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/run-with-failure.json');
    const now = new Date().toISOString();

    mkdirSync(join(repo.path, '.takt'), { recursive: true });
    writeFileSync(
      join(repo.path, '.takt', 'tasks.yaml'),
      [
        'tasks:',
        '  - name: task-ok-1',
        '    status: pending',
        '    content: "Should succeed"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
        '  - name: task-fail',
        '    status: pending',
        '    content: "Should fail"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
        '  - name: task-ok-2',
        '    status: pending',
        '    content: "Should succeed after failure"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
      ].join('\n'),
      'utf-8',
    );

    // When: running takt run
    const result = runTakt({
      args: ['run', '--provider', 'mock'],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: exit code is non-zero (failure occurred), but task-ok-2 was still attempted
    const combined = result.stdout + result.stderr;
    expect(combined).toContain('task-ok-1');
    expect(combined).toContain('task-fail');
    expect(combined).toContain('task-ok-2');
  }, 240_000);

  it('should exit cleanly when no pending tasks exist', () => {
    // Given: an empty tasks.yaml
    mkdirSync(join(repo.path, '.takt'), { recursive: true });
    writeFileSync(
      join(repo.path, '.takt', 'tasks.yaml'),
      'tasks: []\n',
      'utf-8',
    );

    // When: running takt run
    const result = runTakt({
      args: ['run', '--provider', 'mock'],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: exits cleanly with code 0
    expect(result.exitCode).toBe(0);
  }, 240_000);
});
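The run-multiple-tasks spec above builds `.takt/tasks.yaml` by joining hand-written string arrays, one seven-line block per task. If that pattern keeps spreading across specs, a tiny builder would cut the duplication. A sketch, under the assumption that the task schema is exactly the seven fields these specs use (`name`, `status`, `content`, `piece`, `created_at`, `started_at`, `completed_at`); the helper name is hypothetical:

```typescript
// Hypothetical test helper — not part of this PR; field set copied from the specs above.
interface PendingTask {
  name: string;
  content: string;
  piece: string;
  createdAt: string; // ISO timestamp, as produced by new Date().toISOString()
}

// Emits the same flat YAML the specs currently assemble inline,
// with every task in "pending" state and null start/end timestamps.
function buildTasksYaml(tasks: PendingTask[]): string {
  const lines: string[] = ['tasks:'];
  for (const t of tasks) {
    lines.push(
      `  - name: ${t.name}`,
      '    status: pending',
      `    content: "${t.content}"`,
      `    piece: "${t.piece}"`,
      `    created_at: "${t.createdAt}"`,
      '    started_at: null',
      '    completed_at: null',
    );
  }
  return lines.join('\n') + '\n';
}
```

A spec would then write `writeFileSync(join(repo.path, '.takt', 'tasks.yaml'), buildTasksYaml([...]), 'utf-8')` instead of the inline array joins.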
@@ -3,7 +3,11 @@ import { spawn } from 'node:child_process';
 import { mkdirSync, writeFileSync, readFileSync } from 'node:fs';
 import { join, resolve, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
-import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
+import {
+  createIsolatedEnv,
+  updateIsolatedConfig,
+  type IsolatedEnv,
+} from '../helpers/isolated-env';
 import { createTestRepo, type TestRepo } from '../helpers/test-repo';

 const __filename = fileURLToPath(import.meta.url);
@@ -50,18 +54,12 @@ describe('E2E: Run tasks graceful shutdown on SIGINT (parallel)', () => {
     isolatedEnv = createIsolatedEnv();
     testRepo = createTestRepo();

-    writeFileSync(
-      join(isolatedEnv.taktDir, 'config.yaml'),
-      [
-        'provider: mock',
-        'model: mock-model',
-        'language: en',
-        'log_level: info',
-        'default_piece: default',
-        'concurrency: 2',
-        'task_poll_interval_ms: 100',
-      ].join('\n'),
-    );
+    updateIsolatedConfig(isolatedEnv.taktDir, {
+      provider: 'mock',
+      model: 'mock-model',
+      concurrency: 2,
+      task_poll_interval_ms: 100,
+    });
   });

   afterEach(() => {
134  e2e/specs/task-content-file.e2e.ts  Normal file
@@ -0,0 +1,134 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { execFileSync } from 'node:child_process';
import {
  createIsolatedEnv,
  updateIsolatedConfig,
  type IsolatedEnv,
} from '../helpers/isolated-env';
import { runTakt } from '../helpers/takt-runner';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

function createLocalRepo(): { path: string; cleanup: () => void } {
  const repoPath = mkdtempSync(join(tmpdir(), 'takt-e2e-contentfile-'));
  execFileSync('git', ['init'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: repoPath, stdio: 'pipe' });
  writeFileSync(join(repoPath, 'README.md'), '# test\n');
  execFileSync('git', ['add', '.'], { cwd: repoPath, stdio: 'pipe' });
  execFileSync('git', ['commit', '-m', 'init'], { cwd: repoPath, stdio: 'pipe' });
  return {
    path: repoPath,
    cleanup: () => {
      try { rmSync(repoPath, { recursive: true, force: true }); } catch { /* best-effort */ }
    },
  };
}

// When updating E2E specs, also update docs/testing/e2e.md
describe('E2E: Task content_file reference (mock)', () => {
  let isolatedEnv: IsolatedEnv;
  let repo: { path: string; cleanup: () => void };

  const piecePath = resolve(__dirname, '../fixtures/pieces/mock-single-step.yaml');

  beforeEach(() => {
    isolatedEnv = createIsolatedEnv();
    repo = createLocalRepo();

    updateIsolatedConfig(isolatedEnv.taktDir, {
      provider: 'mock',
    });
  });

  afterEach(() => {
    try { repo.cleanup(); } catch { /* best-effort */ }
    try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
  });

  it('should execute task using content_file reference', () => {
    // Given: a task with content_file pointing to an existing file
    const scenarioPath = resolve(__dirname, '../fixtures/scenarios/execute-done.json');
    const now = new Date().toISOString();

    mkdirSync(join(repo.path, '.takt'), { recursive: true });

    // Create the content file
    writeFileSync(
      join(repo.path, 'task-content.txt'),
      'Create a noop file for E2E testing.',
      'utf-8',
    );

    writeFileSync(
      join(repo.path, '.takt', 'tasks.yaml'),
      [
        'tasks:',
        '  - name: content-file-task',
        '    status: pending',
        '    content_file: "./task-content.txt"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
      ].join('\n'),
      'utf-8',
    );

    // When: running takt run
    const result = runTakt({
      args: ['run', '--provider', 'mock'],
      cwd: repo.path,
      env: {
        ...isolatedEnv.env,
        TAKT_MOCK_SCENARIO: scenarioPath,
      },
      timeout: 240_000,
    });

    // Then: task executes successfully
    expect(result.exitCode).toBe(0);
    const combined = result.stdout + result.stderr;
    expect(combined).toContain('content-file-task');
  }, 240_000);

  it('should fail when content_file references a nonexistent file', () => {
    // Given: a task with content_file pointing to a nonexistent file
    const now = new Date().toISOString();

    mkdirSync(join(repo.path, '.takt'), { recursive: true });

    writeFileSync(
      join(repo.path, '.takt', 'tasks.yaml'),
      [
        'tasks:',
        '  - name: bad-content-file-task',
        '    status: pending',
        '    content_file: "./nonexistent-content.txt"',
        `    piece: "${piecePath}"`,
        `    created_at: "${now}"`,
        '    started_at: null',
        '    completed_at: null',
      ].join('\n'),
      'utf-8',
    );

    // When: running takt run
    const result = runTakt({
      args: ['run', '--provider', 'mock'],
      cwd: repo.path,
      env: isolatedEnv.env,
      timeout: 240_000,
    });

    // Then: task fails with a meaningful error
    const combined = result.stdout + result.stderr;
    expect(combined).toMatch(/not found|ENOENT|missing|error/i);
  }, 240_000);
});
@@ -96,6 +96,6 @@ describe('E2E: Watch tasks (takt watch)', () => {
     const tasksRaw = readFileSync(tasksFile, 'utf-8');
     const parsed = parseYaml(tasksRaw) as { tasks?: Array<{ name?: string; status?: string }> };
     const watchTask = parsed.tasks?.find((task) => task.name === 'watch-task');
-    expect(watchTask?.status).toBe('completed');
+    expect(watchTask).toBeUndefined();
   }, 240_000);
 });
11  package-lock.json  generated
@@ -1,16 +1,17 @@
 {
   "name": "takt",
-  "version": "0.11.0",
+  "version": "0.12.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "takt",
-      "version": "0.11.0",
+      "version": "0.12.0",
       "license": "MIT",
       "dependencies": {
         "@anthropic-ai/claude-agent-sdk": "^0.2.37",
         "@openai/codex-sdk": "^0.98.0",
+        "@opencode-ai/sdk": "^1.1.53",
         "chalk": "^5.3.0",
         "commander": "^12.1.0",
         "update-notifier": "^7.3.1",
@@ -936,6 +937,12 @@
         "node": ">=18"
       }
     },
+    "node_modules/@opencode-ai/sdk": {
+      "version": "1.1.53",
+      "resolved": "https://registry.npmjs.org/@opencode-ai/sdk/-/sdk-1.1.53.tgz",
+      "integrity": "sha512-RUIVnPOP1CyyU32FrOOYuE7Ge51lOBuhaFp2NSX98ncApT7ffoNetmwzqrhOiJQgZB1KrbCHLYOCK6AZfacxag==",
+      "license": "MIT"
+    },
     "node_modules/@pnpm/config.env-replace": {
       "version": "1.1.0",
       "resolved": "https://registry.npmjs.org/@pnpm/config.env-replace/-/config.env-replace-1.1.0.tgz",
@@ -1,6 +1,6 @@
{
  "name": "takt",
  "version": "0.11.1",
  "version": "0.12.0",
  "description": "TAKT: Task Agent Koordination Tool - AI Agent Piece Orchestration",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@@ -20,8 +20,10 @@
    "test:e2e:provider": "npm run test:e2e:provider:claude && npm run test:e2e:provider:codex",
    "test:e2e:provider:claude": "TAKT_E2E_PROVIDER=claude vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
    "test:e2e:provider:codex": "TAKT_E2E_PROVIDER=codex vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
    "test:e2e:provider:opencode": "TAKT_E2E_PROVIDER=opencode vitest run --config vitest.config.e2e.provider.ts --reporter=verbose",
    "test:e2e:claude": "npm run test:e2e:provider:claude",
    "test:e2e:codex": "npm run test:e2e:provider:codex",
    "test:e2e:opencode": "npm run test:e2e:provider:opencode",
    "lint": "eslint src/",
    "prepublishOnly": "npm run lint && npm run build && npm run test"
  },
@@ -59,6 +61,7 @@
  "dependencies": {
    "@anthropic-ai/claude-agent-sdk": "^0.2.37",
    "@openai/codex-sdk": "^0.98.0",
    "@opencode-ai/sdk": "^1.1.53",
    "chalk": "^5.3.0",
    "commander": "^12.1.0",
    "update-notifier": "^7.3.1",
@@ -22,7 +22,7 @@ describe('StreamDisplay', () => {
  describe('progress info display', () => {
    const progressInfo: ProgressInfo = {
      iteration: 3,
      maxIterations: 10,
      maxMovements: 10,
      movementIndex: 1,
      totalMovements: 4,
    };
@@ -253,7 +253,7 @@ describe('StreamDisplay', () => {
    it('should format progress as (iteration/max) step index/total', () => {
      const progressInfo: ProgressInfo = {
        iteration: 5,
        maxIterations: 20,
        maxMovements: 20,
        movementIndex: 2,
        totalMovements: 6,
      };
@@ -267,7 +267,7 @@ describe('StreamDisplay', () => {
    it('should convert 0-indexed movementIndex to 1-indexed display', () => {
      const progressInfo: ProgressInfo = {
        iteration: 1,
        maxIterations: 10,
        maxMovements: 10,
        movementIndex: 0, // First movement (0-indexed)
        totalMovements: 4,
      };
@@ -8,11 +8,6 @@ vi.mock('../features/interactive/index.js', () => ({
  interactiveMode: vi.fn(),
}));

vi.mock('../infra/config/global/globalConfig.js', () => ({
  loadGlobalConfig: vi.fn(() => ({ provider: 'claude' })),
  getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
}));

vi.mock('../shared/prompt/index.js', () => ({
  promptInput: vi.fn(),
  confirm: vi.fn(),
@@ -23,6 +18,7 @@ vi.mock('../shared/ui/index.js', () => ({
  info: vi.fn(),
  blankLine: vi.fn(),
  error: vi.fn(),
  withProgress: vi.fn(async (_start, _done, operation) => operation()),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
@@ -38,15 +34,6 @@ vi.mock('../features/tasks/execute/selectAndExecute.js', () => ({
  determinePiece: vi.fn(),
}));

vi.mock('../infra/config/loaders/pieceResolver.js', () => ({
  getPieceDescription: vi.fn(() => ({
    name: 'default',
    description: '',
    pieceStructure: '1. implement\n2. review',
    movementPreviews: [],
  })),
}));

vi.mock('../infra/github/issue.js', () => ({
  isIssueReference: vi.fn((s: string) => /^#\d+$/.test(s)),
  resolveIssueTask: vi.fn(),
@@ -65,15 +52,17 @@ vi.mock('../infra/github/issue.js', () => ({

import { interactiveMode } from '../features/interactive/index.js';
import { promptInput, confirm } from '../shared/prompt/index.js';
import { info } from '../shared/ui/index.js';
import { determinePiece } from '../features/tasks/execute/selectAndExecute.js';
import { resolveIssueTask } from '../infra/github/issue.js';
import { addTask } from '../features/tasks/index.js';

const mockResolveIssueTask = vi.mocked(resolveIssueTask);
const mockInteractiveMode = vi.mocked(interactiveMode);
const mockPromptInput = vi.mocked(promptInput);
const mockConfirm = vi.mocked(confirm);
const mockInfo = vi.mocked(info);
const mockDeterminePiece = vi.mocked(determinePiece);
const mockResolveIssueTask = vi.mocked(resolveIssueTask);

let testDir: string;

@@ -96,23 +85,42 @@ afterEach(() => {
});

describe('addTask', () => {
  it('should create task entry from interactive result', async () => {
    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: '# 認証機能追加\nJWT認証を実装する' });
  function readOrderContent(dir: string, taskDir: unknown): string {
    return fs.readFileSync(path.join(dir, String(taskDir), 'order.md'), 'utf-8');
  }

  it('should show usage and exit when task is missing', async () => {
    await addTask(testDir);

    const tasks = loadTasks(testDir).tasks;
    expect(tasks).toHaveLength(1);
    expect(tasks[0]?.content).toContain('JWT認証を実装する');
    expect(tasks[0]?.piece).toBe('default');
    expect(mockInfo).toHaveBeenCalledWith('Usage: takt add <task>');
    expect(mockDeterminePiece).not.toHaveBeenCalled();
    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
  });

  it('should show usage and exit when task is blank', async () => {
    await addTask(testDir, ' ');

    expect(mockInfo).toHaveBeenCalledWith('Usage: takt add <task>');
    expect(mockDeterminePiece).not.toHaveBeenCalled();
    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
  });

  it('should save plain text task without interactive mode', async () => {
    await addTask(testDir, ' JWT認証を実装する ');

    expect(mockInteractiveMode).not.toHaveBeenCalled();
    const task = loadTasks(testDir).tasks[0]!;
    expect(task.content).toBeUndefined();
    expect(task.task_dir).toBeTypeOf('string');
    expect(readOrderContent(testDir, task.task_dir)).toContain('JWT認証を実装する');
    expect(task.piece).toBe('default');
  });

  it('should include worktree settings when enabled', async () => {
    mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'Task content' });
    mockConfirm.mockResolvedValue(true);
    mockPromptInput.mockResolvedValueOnce('/custom/path').mockResolvedValueOnce('feat/branch');

    await addTask(testDir);
    await addTask(testDir, 'Task content');

    const task = loadTasks(testDir).tasks[0]!;
    expect(task.worktree).toBe('/custom/path');
@@ -121,20 +129,20 @@ describe('addTask', () => {

  it('should create task from issue reference without interactive mode', async () => {
    mockResolveIssueTask.mockReturnValue('Issue #99: Fix login timeout');
    mockConfirm.mockResolvedValue(false);

    await addTask(testDir, '#99');

    expect(mockInteractiveMode).not.toHaveBeenCalled();
    const task = loadTasks(testDir).tasks[0]!;
    expect(task.content).toContain('Fix login timeout');
    expect(task.content).toBeUndefined();
    expect(readOrderContent(testDir, task.task_dir)).toContain('Fix login timeout');
    expect(task.issue).toBe(99);
  });

  it('should not create task when piece selection is cancelled', async () => {
    mockDeterminePiece.mockResolvedValue(null);

    await addTask(testDir);
    await addTask(testDir, 'Task content');

    expect(fs.existsSync(path.join(testDir, '.takt', 'tasks.yaml'))).toBe(false);
  });
@@ -32,7 +32,7 @@ vi.mock('../infra/config/paths.js', async (importOriginal) => {
});

// Import after mocking
const { loadGlobalConfig, saveGlobalConfig, resolveAnthropicApiKey, resolveOpenaiApiKey, invalidateGlobalConfigCache } = await import('../infra/config/global/globalConfig.js');
const { loadGlobalConfig, saveGlobalConfig, resolveAnthropicApiKey, resolveOpenaiApiKey, resolveOpencodeApiKey, invalidateGlobalConfigCache } = await import('../infra/config/global/globalConfig.js');

describe('GlobalConfigSchema API key fields', () => {
  it('should accept config without API keys', () => {
@@ -280,3 +280,65 @@ describe('resolveOpenaiApiKey', () => {
    expect(key).toBeUndefined();
  });
});

describe('resolveOpencodeApiKey', () => {
  const originalEnv = process.env['TAKT_OPENCODE_API_KEY'];

  beforeEach(() => {
    invalidateGlobalConfigCache();
    mkdirSync(taktDir, { recursive: true });
  });

  afterEach(() => {
    if (originalEnv !== undefined) {
      process.env['TAKT_OPENCODE_API_KEY'] = originalEnv;
    } else {
      delete process.env['TAKT_OPENCODE_API_KEY'];
    }
    rmSync(testDir, { recursive: true, force: true });
  });

  it('should return env var when set', () => {
    process.env['TAKT_OPENCODE_API_KEY'] = 'sk-opencode-from-env';
    const yaml = [
      'language: en',
      'default_piece: default',
      'log_level: info',
      'provider: claude',
      'opencode_api_key: sk-opencode-from-yaml',
    ].join('\n');
    writeFileSync(configPath, yaml, 'utf-8');

    const key = resolveOpencodeApiKey();
    expect(key).toBe('sk-opencode-from-env');
  });

  it('should fall back to config when env var is not set', () => {
    delete process.env['TAKT_OPENCODE_API_KEY'];
    const yaml = [
      'language: en',
      'default_piece: default',
      'log_level: info',
      'provider: claude',
      'opencode_api_key: sk-opencode-from-yaml',
    ].join('\n');
    writeFileSync(configPath, yaml, 'utf-8');

    const key = resolveOpencodeApiKey();
    expect(key).toBe('sk-opencode-from-yaml');
  });

  it('should return undefined when neither env var nor config is set', () => {
    delete process.env['TAKT_OPENCODE_API_KEY'];
    const yaml = [
      'language: en',
      'default_piece: default',
      'log_level: info',
      'provider: claude',
    ].join('\n');
    writeFileSync(configPath, yaml, 'utf-8');

    const key = resolveOpencodeApiKey();
    expect(key).toBeUndefined();
  });
});
src/__tests__/arpeggio-csv.test.ts (new file, 136 lines)
@@ -0,0 +1,136 @@
/**
 * Tests for CSV data source parsing and batch reading.
 */

import { describe, it, expect } from 'vitest';
import { parseCsv, CsvDataSource } from '../core/piece/arpeggio/csv-data-source.js';
import { writeFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { randomUUID } from 'node:crypto';

describe('parseCsv', () => {
  it('should parse simple CSV content', () => {
    const csv = 'name,age\nAlice,30\nBob,25';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'age'],
      ['Alice', '30'],
      ['Bob', '25'],
    ]);
  });

  it('should handle quoted fields', () => {
    const csv = 'name,description\nAlice,"Hello, World"\nBob,"Line1"';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'description'],
      ['Alice', 'Hello, World'],
      ['Bob', 'Line1'],
    ]);
  });

  it('should handle escaped quotes (double quotes)', () => {
    const csv = 'name,value\nAlice,"He said ""hello"""\nBob,simple';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'value'],
      ['Alice', 'He said "hello"'],
      ['Bob', 'simple'],
    ]);
  });

  it('should handle CRLF line endings', () => {
    const csv = 'name,age\r\nAlice,30\r\nBob,25';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'age'],
      ['Alice', '30'],
      ['Bob', '25'],
    ]);
  });

  it('should handle bare CR line endings', () => {
    const csv = 'name,age\rAlice,30\rBob,25';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'age'],
      ['Alice', '30'],
      ['Bob', '25'],
    ]);
  });

  it('should handle empty fields', () => {
    const csv = 'a,b,c\n1,,3\n,,';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['a', 'b', 'c'],
      ['1', '', '3'],
      ['', '', ''],
    ]);
  });

  it('should handle newlines within quoted fields', () => {
    const csv = 'name,bio\nAlice,"Line1\nLine2"\nBob,simple';
    const result = parseCsv(csv);
    expect(result).toEqual([
      ['name', 'bio'],
      ['Alice', 'Line1\nLine2'],
      ['Bob', 'simple'],
    ]);
  });
});

describe('CsvDataSource', () => {
  function createTempCsv(content: string): string {
    const dir = join(tmpdir(), `takt-csv-test-${randomUUID()}`);
    mkdirSync(dir, { recursive: true });
    const filePath = join(dir, 'test.csv');
    writeFileSync(filePath, content, 'utf-8');
    return filePath;
  }

  it('should read batches with batch_size 1', async () => {
    const filePath = createTempCsv('name,age\nAlice,30\nBob,25\nCharlie,35');
    const source = new CsvDataSource(filePath);
    const batches = await source.readBatches(1);

    expect(batches).toHaveLength(3);
    expect(batches[0]!.rows).toEqual([{ name: 'Alice', age: '30' }]);
    expect(batches[0]!.batchIndex).toBe(0);
    expect(batches[0]!.totalBatches).toBe(3);
    expect(batches[1]!.rows).toEqual([{ name: 'Bob', age: '25' }]);
    expect(batches[2]!.rows).toEqual([{ name: 'Charlie', age: '35' }]);
  });

  it('should read batches with batch_size 2', async () => {
    const filePath = createTempCsv('name,age\nAlice,30\nBob,25\nCharlie,35');
    const source = new CsvDataSource(filePath);
    const batches = await source.readBatches(2);

    expect(batches).toHaveLength(2);
    expect(batches[0]!.rows).toEqual([
      { name: 'Alice', age: '30' },
      { name: 'Bob', age: '25' },
    ]);
    expect(batches[0]!.totalBatches).toBe(2);
    expect(batches[1]!.rows).toEqual([
      { name: 'Charlie', age: '35' },
    ]);
  });

  it('should throw when CSV has no data rows', async () => {
    const filePath = createTempCsv('name,age');
    const source = new CsvDataSource(filePath);
    await expect(source.readBatches(1)).rejects.toThrow('CSV file has no data rows');
  });

  it('should handle missing columns by returning empty string', async () => {
    const filePath = createTempCsv('a,b,c\n1,2\n3');
    const source = new CsvDataSource(filePath);
    const batches = await source.readBatches(1);

    expect(batches[0]!.rows).toEqual([{ a: '1', b: '2', c: '' }]);
    expect(batches[1]!.rows).toEqual([{ a: '3', b: '', c: '' }]);
  });
});
src/__tests__/arpeggio-data-source-factory.test.ts (new file, 50 lines)
@@ -0,0 +1,50 @@
/**
 * Tests for the arpeggio data source factory.
 *
 * Covers:
 * - Built-in 'csv' source type returns CsvDataSource
 * - Custom module: valid default export returns a data source
 * - Custom module: non-function default export throws
 * - Custom module: missing default export throws
 */

import { describe, it, expect } from 'vitest';
import { createDataSource } from '../core/piece/arpeggio/data-source-factory.js';
import { CsvDataSource } from '../core/piece/arpeggio/csv-data-source.js';

describe('createDataSource', () => {
  it('should return a CsvDataSource for built-in "csv" type', async () => {
    const source = await createDataSource('csv', '/path/to/data.csv');
    expect(source).toBeInstanceOf(CsvDataSource);
  });

  it('should return a valid data source from a custom module with correct default export', async () => {
    const tempModulePath = new URL(
      'data:text/javascript,export default function(path) { return { readBatches: async () => [] }; }'
    ).href;

    const source = await createDataSource(tempModulePath, '/some/path');
    expect(source).toBeDefined();
    expect(typeof source.readBatches).toBe('function');
  });

  it('should throw when custom module does not export a default function', async () => {
    const tempModulePath = new URL(
      'data:text/javascript,export default "not-a-function"'
    ).href;

    await expect(createDataSource(tempModulePath, '/some/path')).rejects.toThrow(
      /must export a default factory function/
    );
  });

  it('should throw when custom module has no default export', async () => {
    const tempModulePath = new URL(
      'data:text/javascript,export const foo = 42'
    ).href;

    await expect(createDataSource(tempModulePath, '/some/path')).rejects.toThrow(
      /must export a default factory function/
    );
  });
});
src/__tests__/arpeggio-merge.test.ts (new file, 108 lines)
@@ -0,0 +1,108 @@
/**
 * Tests for arpeggio merge processing.
 */

import { describe, it, expect } from 'vitest';
import { buildMergeFn } from '../core/piece/arpeggio/merge.js';
import type { ArpeggioMergeMovementConfig } from '../core/piece/arpeggio/types.js';
import type { BatchResult } from '../core/piece/arpeggio/types.js';

function makeResult(batchIndex: number, content: string, success = true): BatchResult {
  return { batchIndex, content, success };
}

function makeFailedResult(batchIndex: number, error: string): BatchResult {
  return { batchIndex, content: '', success: false, error };
}

describe('buildMergeFn', () => {
  describe('concat strategy', () => {
    it('should concatenate results with default separator (newline)', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeResult(0, 'Result A'),
        makeResult(1, 'Result B'),
        makeResult(2, 'Result C'),
      ];
      expect(mergeFn(results)).toBe('Result A\nResult B\nResult C');
    });

    it('should concatenate results with custom separator', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'concat', separator: '\n---\n' };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeResult(0, 'A'),
        makeResult(1, 'B'),
      ];
      expect(mergeFn(results)).toBe('A\n---\nB');
    });

    it('should sort results by batch index', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeResult(2, 'C'),
        makeResult(0, 'A'),
        makeResult(1, 'B'),
      ];
      expect(mergeFn(results)).toBe('A\nB\nC');
    });

    it('should filter out failed results', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeResult(0, 'A'),
        makeFailedResult(1, 'oops'),
        makeResult(2, 'C'),
      ];
      expect(mergeFn(results)).toBe('A\nC');
    });

    it('should return empty string when all results failed', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'concat' };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeFailedResult(0, 'error1'),
        makeFailedResult(1, 'error2'),
      ];
      expect(mergeFn(results)).toBe('');
    });
  });

  describe('custom strategy with inline_js', () => {
    it('should execute inline JS merge function', async () => {
      const config: ArpeggioMergeMovementConfig = {
        strategy: 'custom',
        inlineJs: 'return results.filter(r => r.success).map(r => r.content.toUpperCase()).join(", ");',
      };
      const mergeFn = await buildMergeFn(config);
      const results = [
        makeResult(0, 'hello'),
        makeResult(1, 'world'),
      ];
      expect(mergeFn(results)).toBe('HELLO, WORLD');
    });

    it('should throw when inline JS returns non-string', async () => {
      const config: ArpeggioMergeMovementConfig = {
        strategy: 'custom',
        inlineJs: 'return 42;',
      };
      const mergeFn = await buildMergeFn(config);
      expect(() => mergeFn([makeResult(0, 'test')])).toThrow(
        'Inline JS merge function must return a string, got number'
      );
    });
  });

  describe('custom strategy validation', () => {
    it('should throw when custom strategy has neither inline_js nor file', async () => {
      const config: ArpeggioMergeMovementConfig = { strategy: 'custom' };
      await expect(buildMergeFn(config)).rejects.toThrow(
        'Custom merge strategy requires either inline_js or file path'
      );
    });
  });
});
src/__tests__/arpeggio-schema.test.ts (new file, 332 lines)
@@ -0,0 +1,332 @@
/**
 * Tests for Arpeggio-related Zod schemas.
 *
 * Covers:
 * - ArpeggioMergeRawSchema cross-validation (.refine())
 * - ArpeggioConfigRawSchema required fields and defaults
 * - PieceMovementRawSchema with arpeggio field
 */

import { describe, it, expect } from 'vitest';
import {
  ArpeggioMergeRawSchema,
  ArpeggioConfigRawSchema,
  PieceMovementRawSchema,
} from '../core/models/index.js';

describe('ArpeggioMergeRawSchema', () => {
  it('should accept concat strategy without inline_js or file', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'concat',
    });
    expect(result.success).toBe(true);
  });

  it('should accept concat strategy with separator', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'concat',
      separator: '\n---\n',
    });
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.separator).toBe('\n---\n');
    }
  });

  it('should default strategy to concat when omitted', () => {
    const result = ArpeggioMergeRawSchema.safeParse({});
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.strategy).toBe('concat');
    }
  });

  it('should accept custom strategy with inline_js', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'custom',
      inline_js: 'return results.map(r => r.content).join(",");',
    });
    expect(result.success).toBe(true);
  });

  it('should accept custom strategy with file', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'custom',
      file: './merge.js',
    });
    expect(result.success).toBe(true);
  });

  it('should reject custom strategy without inline_js or file', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'custom',
    });
    expect(result.success).toBe(false);
  });

  it('should reject concat strategy with inline_js', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'concat',
      inline_js: 'return "hello";',
    });
    expect(result.success).toBe(false);
  });

  it('should reject concat strategy with file', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'concat',
      file: './merge.js',
    });
    expect(result.success).toBe(false);
  });

  it('should reject invalid strategy value', () => {
    const result = ArpeggioMergeRawSchema.safeParse({
      strategy: 'invalid',
    });
    expect(result.success).toBe(false);
  });
});

describe('ArpeggioConfigRawSchema', () => {
  const validConfig = {
    source: 'csv',
    source_path: './data.csv',
    template: './template.md',
  };

  it('should accept a valid minimal config', () => {
    const result = ArpeggioConfigRawSchema.safeParse(validConfig);
    expect(result.success).toBe(true);
  });

  it('should apply default values for optional fields', () => {
    const result = ArpeggioConfigRawSchema.safeParse(validConfig);
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.batch_size).toBe(1);
      expect(result.data.concurrency).toBe(1);
      expect(result.data.max_retries).toBe(2);
      expect(result.data.retry_delay_ms).toBe(1000);
    }
  });

  it('should accept explicit values overriding defaults', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      batch_size: 5,
      concurrency: 3,
      max_retries: 4,
      retry_delay_ms: 2000,
    });
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.batch_size).toBe(5);
      expect(result.data.concurrency).toBe(3);
      expect(result.data.max_retries).toBe(4);
      expect(result.data.retry_delay_ms).toBe(2000);
    }
  });

  it('should accept config with merge field', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      merge: { strategy: 'concat', separator: '---' },
    });
    expect(result.success).toBe(true);
  });

  it('should accept config with output_path', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      output_path: './output.txt',
    });
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.output_path).toBe('./output.txt');
    }
  });

  it('should reject when source is empty', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      source: '',
    });
    expect(result.success).toBe(false);
  });

  it('should reject when source is missing', () => {
    const { source: _, ...noSource } = validConfig;
    const result = ArpeggioConfigRawSchema.safeParse(noSource);
    expect(result.success).toBe(false);
  });

  it('should reject when source_path is empty', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      source_path: '',
    });
    expect(result.success).toBe(false);
  });

  it('should reject when source_path is missing', () => {
    const { source_path: _, ...noSourcePath } = validConfig;
    const result = ArpeggioConfigRawSchema.safeParse(noSourcePath);
    expect(result.success).toBe(false);
  });

  it('should reject when template is empty', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      template: '',
    });
    expect(result.success).toBe(false);
  });

  it('should reject when template is missing', () => {
    const { template: _, ...noTemplate } = validConfig;
    const result = ArpeggioConfigRawSchema.safeParse(noTemplate);
    expect(result.success).toBe(false);
  });

  it('should reject batch_size of 0', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      batch_size: 0,
    });
    expect(result.success).toBe(false);
  });

  it('should reject negative batch_size', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      batch_size: -1,
    });
    expect(result.success).toBe(false);
  });

  it('should reject concurrency of 0', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      concurrency: 0,
    });
    expect(result.success).toBe(false);
  });

  it('should reject negative concurrency', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      concurrency: -1,
    });
    expect(result.success).toBe(false);
  });

  it('should reject negative max_retries', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      max_retries: -1,
    });
    expect(result.success).toBe(false);
  });

  it('should accept max_retries of 0', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      max_retries: 0,
    });
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.max_retries).toBe(0);
    }
  });

  it('should reject non-integer batch_size', () => {
    const result = ArpeggioConfigRawSchema.safeParse({
      ...validConfig,
      batch_size: 1.5,
    });
    expect(result.success).toBe(false);
  });
});

describe('PieceMovementRawSchema with arpeggio', () => {
  it('should accept a movement with arpeggio config', () => {
    const raw = {
      name: 'batch-process',
      arpeggio: {
        source: 'csv',
        source_path: './data.csv',
        template: './prompt.md',
      },
    };

    const result = PieceMovementRawSchema.safeParse(raw);
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.arpeggio).toBeDefined();
      expect(result.data.arpeggio!.source).toBe('csv');
    }
  });

  it('should accept a movement with arpeggio and rules', () => {
    const raw = {
      name: 'batch-process',
      arpeggio: {
        source: 'csv',
        source_path: './data.csv',
        template: './prompt.md',
        batch_size: 2,
        concurrency: 3,
      },
      rules: [
        { condition: 'All processed', next: 'COMPLETE' },
        { condition: 'Errors found', next: 'fix' },
      ],
    };

    const result = PieceMovementRawSchema.safeParse(raw);
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.arpeggio!.batch_size).toBe(2);
      expect(result.data.arpeggio!.concurrency).toBe(3);
      expect(result.data.rules).toHaveLength(2);
    }
  });

  it('should accept a movement without arpeggio (normal movement)', () => {
    const raw = {
      name: 'normal-step',
      persona: 'coder.md',
      instruction_template: 'Do work',
    };

    const result = PieceMovementRawSchema.safeParse(raw);
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.arpeggio).toBeUndefined();
    }
  });

  it('should accept a movement with arpeggio including custom merge', () => {
    const raw = {
      name: 'custom-merge-step',
      arpeggio: {
        source: 'csv',
        source_path: './data.csv',
        template: './prompt.md',
        merge: {
          strategy: 'custom',
          inline_js: 'return results.map(r => r.content).join(", ");',
        },
        output_path: './output.txt',
      },
    };

    const result = PieceMovementRawSchema.safeParse(raw);
    expect(result.success).toBe(true);
    if (result.success) {
      expect(result.data.arpeggio!.merge).toBeDefined();
      expect(result.data.arpeggio!.output_path).toBe('./output.txt');
    }
  });
});
src/__tests__/arpeggio-template.test.ts (Normal file, +83)
@@ -0,0 +1,83 @@
/**
 * Tests for arpeggio template expansion.
 */

import { describe, it, expect } from 'vitest';
import { expandTemplate } from '../core/piece/arpeggio/template.js';
import type { DataBatch } from '../core/piece/arpeggio/types.js';

function makeBatch(rows: Record<string, string>[], batchIndex = 0, totalBatches = 1): DataBatch {
  return { rows, batchIndex, totalBatches };
}

describe('expandTemplate', () => {
  it('should expand {line:1} with formatted row data', () => {
    const batch = makeBatch([{ name: 'Alice', age: '30' }]);
    const result = expandTemplate('Process this: {line:1}', batch);
    expect(result).toBe('Process this: name: Alice\nage: 30');
  });

  it('should expand {line:1} and {line:2} for multi-row batches', () => {
    const batch = makeBatch([
      { name: 'Alice', age: '30' },
      { name: 'Bob', age: '25' },
    ]);
    const result = expandTemplate('Row 1: {line:1}\nRow 2: {line:2}', batch);
    expect(result).toBe('Row 1: name: Alice\nage: 30\nRow 2: name: Bob\nage: 25');
  });

  it('should expand {col:N:name} with specific column values', () => {
    const batch = makeBatch([{ name: 'Alice', age: '30', city: 'Tokyo' }]);
    const result = expandTemplate('Name: {col:1:name}, City: {col:1:city}', batch);
    expect(result).toBe('Name: Alice, City: Tokyo');
  });

  it('should expand {batch_index} and {total_batches}', () => {
    const batch = makeBatch([{ name: 'Alice' }], 2, 5);
    const result = expandTemplate('Batch {batch_index} of {total_batches}', batch);
    expect(result).toBe('Batch 2 of 5');
  });

  it('should expand all placeholder types in a single template', () => {
    const batch = makeBatch([
      { name: 'Alice', role: 'dev' },
      { name: 'Bob', role: 'pm' },
    ], 0, 3);
    const template = 'Batch {batch_index}/{total_batches}\nFirst: {col:1:name}\nSecond: {line:2}';
    const result = expandTemplate(template, batch);
    expect(result).toBe('Batch 0/3\nFirst: Alice\nSecond: name: Bob\nrole: pm');
  });

  it('should throw when {line:N} references out-of-range row', () => {
    const batch = makeBatch([{ name: 'Alice' }]);
    expect(() => expandTemplate('{line:2}', batch)).toThrow(
      'Template placeholder {line:2} references row 2 but batch has 1 rows'
    );
  });

  it('should throw when {col:N:name} references out-of-range row', () => {
    const batch = makeBatch([{ name: 'Alice' }]);
    expect(() => expandTemplate('{col:2:name}', batch)).toThrow(
      'Template placeholder {col:2:name} references row 2 but batch has 1 rows'
    );
  });

  it('should throw when {col:N:name} references unknown column', () => {
    const batch = makeBatch([{ name: 'Alice' }]);
    expect(() => expandTemplate('{col:1:missing}', batch)).toThrow(
      'Template placeholder {col:1:missing} references unknown column "missing"'
    );
  });

  it('should handle templates with no placeholders', () => {
    const batch = makeBatch([{ name: 'Alice' }]);
    const result = expandTemplate('No placeholders here', batch);
    expect(result).toBe('No placeholders here');
  });

  it('should handle multiple occurrences of the same placeholder', () => {
    const batch = makeBatch([{ name: 'Alice' }], 1, 3);
    const result = expandTemplate('{batch_index} and {batch_index}', batch);
    expect(result).toBe('1 and 1');
  });
});
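Read as a specification, the tests above fully pin down the placeholder grammar ({line:N}, {col:N:name}, {batch_index}, {total_batches}) and the exact error messages. As a reading aid, here is a minimal sketch of an expandTemplate that satisfies those cases. This is an inferred reconstruction, not the actual src/core/piece/arpeggio/template.ts, which is not shown in this diff:

```typescript
// Minimal sketch of expandTemplate, inferred from the test expectations above.
type DataBatch = { rows: Record<string, string>[]; batchIndex: number; totalBatches: number };

// Rows render as "key: value" lines, matching the expected strings in the tests.
function formatRow(row: Record<string, string>): string {
  return Object.entries(row)
    .map(([key, value]) => `${key}: ${value}`)
    .join('\n');
}

// Resolve a 1-based row index, throwing the exact message the tests assert.
function rowAt(batch: DataBatch, n: number, placeholder: string): Record<string, string> {
  const row = batch.rows[n - 1];
  if (row === undefined) {
    throw new Error(
      `Template placeholder ${placeholder} references row ${n} but batch has ${batch.rows.length} rows`,
    );
  }
  return row;
}

function expandTemplate(template: string, batch: DataBatch): string {
  return template
    .replace(/\{line:(\d+)\}/g, (ph, n) => formatRow(rowAt(batch, Number(n), ph)))
    .replace(/\{col:(\d+):([^}]+)\}/g, (ph, n, col) => {
      const row = rowAt(batch, Number(n), ph);
      if (!(col in row)) {
        throw new Error(`Template placeholder ${ph} references unknown column "${col}"`);
      }
      return row[col];
    })
    .replace(/\{batch_index\}/g, String(batch.batchIndex))
    .replace(/\{total_batches\}/g, String(batch.totalBatches));
}
```

Since the replacer callbacks return strings rather than replacement patterns, `$`-sequences in row data are inserted literally, which keeps expansion safe for arbitrary CSV values.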
@@ -11,6 +11,11 @@ import { describe, it, expect, vi, beforeEach } from 'vitest';
 vi.mock('../shared/ui/index.js', () => ({
   info: vi.fn(),
   error: vi.fn(),
+  withProgress: vi.fn(async (_start, _done, operation) => operation()),
 }));
 
+vi.mock('../shared/prompt/index.js', () => ({
+  confirm: vi.fn(() => true),
+}));
+
 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
@@ -46,6 +51,7 @@ vi.mock('../features/pipeline/index.js', () => ({
 vi.mock('../features/interactive/index.js', () => ({
   interactiveMode: vi.fn(),
   selectInteractiveMode: vi.fn(() => 'assistant'),
+  selectRecentSession: vi.fn(() => null),
   passthroughMode: vi.fn(),
   quietMode: vi.fn(),
   personaMode: vi.fn(),
@@ -83,8 +89,10 @@ vi.mock('../app/cli/helpers.js', () => ({
 }));
 
 import { checkGhCli, fetchIssue, formatIssueAsTask, parseIssueNumbers } from '../infra/github/issue.js';
-import { selectAndExecuteTask, determinePiece } from '../features/tasks/index.js';
-import { interactiveMode } from '../features/interactive/index.js';
+import { selectAndExecuteTask, determinePiece, createIssueFromTask, saveTaskFromInteractive } from '../features/tasks/index.js';
+import { interactiveMode, selectRecentSession } from '../features/interactive/index.js';
+import { loadGlobalConfig } from '../infra/config/index.js';
+import { confirm } from '../shared/prompt/index.js';
 import { isDirectTask } from '../app/cli/helpers.js';
 import { executeDefaultAction } from '../app/cli/routing.js';
 import type { GitHubIssue } from '../infra/github/types.js';
@@ -95,7 +103,12 @@ const mockFormatIssueAsTask = vi.mocked(formatIssueAsTask);
 const mockParseIssueNumbers = vi.mocked(parseIssueNumbers);
 const mockSelectAndExecuteTask = vi.mocked(selectAndExecuteTask);
 const mockDeterminePiece = vi.mocked(determinePiece);
+const mockCreateIssueFromTask = vi.mocked(createIssueFromTask);
+const mockSaveTaskFromInteractive = vi.mocked(saveTaskFromInteractive);
 const mockInteractiveMode = vi.mocked(interactiveMode);
+const mockSelectRecentSession = vi.mocked(selectRecentSession);
+const mockLoadGlobalConfig = vi.mocked(loadGlobalConfig);
+const mockConfirm = vi.mocked(confirm);
 const mockIsDirectTask = vi.mocked(isDirectTask);
 
 function createMockIssue(number: number): GitHubIssue {
@@ -117,6 +130,7 @@ beforeEach(() => {
   // Default setup
   mockDeterminePiece.mockResolvedValue('default');
   mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'summarized task' });
+  mockConfirm.mockResolvedValue(true);
   mockIsDirectTask.mockReturnValue(false);
   mockParseIssueNumbers.mockReturnValue([]);
 });
@@ -142,6 +156,7 @@ describe('Issue resolution in routing', () => {
       '/test/cwd',
       '## GitHub Issue #131: Issue #131',
       expect.anything(),
+      undefined,
     );
 
     // Then: selectAndExecuteTask should receive issues in options
@@ -194,6 +209,7 @@ describe('Issue resolution in routing', () => {
       '/test/cwd',
       '## GitHub Issue #131: Issue #131',
       expect.anything(),
+      undefined,
     );
 
     // Then: selectAndExecuteTask should receive issues
@@ -218,6 +234,7 @@ describe('Issue resolution in routing', () => {
       '/test/cwd',
       'refactor the code',
       expect.anything(),
+      undefined,
     );
 
     // Then: no issue fetching should occur
@@ -237,6 +254,7 @@ describe('Issue resolution in routing', () => {
       '/test/cwd',
       undefined,
       expect.anything(),
+      undefined,
     );
 
     // Then: no issue fetching should occur
@@ -261,4 +279,112 @@ describe('Issue resolution in routing', () => {
     expect(mockSelectAndExecuteTask).not.toHaveBeenCalled();
   });
 });
+
+describe('create_issue action', () => {
+  it('should create issue first, then delegate final confirmation to saveTaskFromInteractive', async () => {
+    // Given
+    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
+    mockCreateIssueFromTask.mockReturnValue(226);
+
+    // When
+    await executeDefaultAction();
+
+    // Then: issue is created first
+    expect(mockCreateIssueFromTask).toHaveBeenCalledWith('New feature request');
+    // Then: saveTaskFromInteractive receives final confirmation message
+    expect(mockSaveTaskFromInteractive).toHaveBeenCalledWith(
+      '/test/cwd',
+      'New feature request',
+      'default',
+      { issue: 226, confirmAtEndMessage: 'Add this issue to tasks?' },
+    );
+  });
+
+  it('should skip confirmation and task save when issue creation fails', async () => {
+    // Given
+    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
+    mockCreateIssueFromTask.mockReturnValue(undefined);
+
+    // When
+    await executeDefaultAction();
+
+    // Then
+    expect(mockCreateIssueFromTask).toHaveBeenCalledWith('New feature request');
+    expect(mockSaveTaskFromInteractive).not.toHaveBeenCalled();
+  });
+
+  it('should not call selectAndExecuteTask when create_issue action is chosen', async () => {
+    // Given
+    mockInteractiveMode.mockResolvedValue({ action: 'create_issue', task: 'New feature request' });
+
+    // When
+    await executeDefaultAction();
+
+    // Then: selectAndExecuteTask should NOT be called
+    expect(mockSelectAndExecuteTask).not.toHaveBeenCalled();
+  });
+});
+
+describe('session selection with provider=claude', () => {
+  it('should pass selected session ID to interactiveMode when provider is claude', async () => {
+    // Given
+    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'claude' });
+    mockConfirm.mockResolvedValue(true);
+    mockSelectRecentSession.mockResolvedValue('session-xyz');
+
+    // When
+    await executeDefaultAction();
+
+    // Then: selectRecentSession should be called
+    expect(mockSelectRecentSession).toHaveBeenCalledWith('/test/cwd', 'en');
+
+    // Then: interactiveMode should receive the session ID as 4th argument
+    expect(mockInteractiveMode).toHaveBeenCalledWith(
+      '/test/cwd',
+      undefined,
+      expect.anything(),
+      'session-xyz',
+    );
+
+    expect(mockConfirm).toHaveBeenCalledWith('Choose a previous session?', false);
+  });
+
+  it('should not call selectRecentSession when user selects no in confirmation', async () => {
+    // Given
+    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'claude' });
+    mockConfirm.mockResolvedValue(false);
+
+    // When
+    await executeDefaultAction();
+
+    // Then
+    expect(mockConfirm).toHaveBeenCalledWith('Choose a previous session?', false);
+    expect(mockSelectRecentSession).not.toHaveBeenCalled();
+    expect(mockInteractiveMode).toHaveBeenCalledWith(
+      '/test/cwd',
+      undefined,
+      expect.anything(),
+      undefined,
+    );
+  });
+
+  it('should not call selectRecentSession when provider is not claude', async () => {
+    // Given
+    mockLoadGlobalConfig.mockReturnValue({ interactivePreviewMovements: 3, provider: 'openai' });
+
+    // When
+    await executeDefaultAction();
+
+    // Then: selectRecentSession should NOT be called
+    expect(mockSelectRecentSession).not.toHaveBeenCalled();
+
+    // Then: interactiveMode should be called with undefined session ID
+    expect(mockInteractiveMode).toHaveBeenCalledWith(
+      '/test/cwd',
+      undefined,
+      expect.anything(),
+      undefined,
+    );
+  });
+});
 });
@@ -28,14 +28,23 @@ vi.mock('../infra/task/summarize.js', () => ({
   summarizeTaskName: vi.fn(),
 }));
 
-vi.mock('../shared/ui/index.js', () => ({
-  info: vi.fn(),
-  error: vi.fn(),
-  success: vi.fn(),
-  header: vi.fn(),
-  status: vi.fn(),
-  setLogLevel: vi.fn(),
-}));
+vi.mock('../shared/ui/index.js', () => {
+  const info = vi.fn();
+  return {
+    info,
+    error: vi.fn(),
+    success: vi.fn(),
+    header: vi.fn(),
+    status: vi.fn(),
+    setLogLevel: vi.fn(),
+    withProgress: vi.fn(async (start, done, operation) => {
+      info(start);
+      const result = await operation();
+      info(typeof done === 'function' ? done(result) : done);
+      return result;
+    }),
+  };
+});
 
 vi.mock('../shared/utils/index.js', async (importOriginal) => ({
   ...(await importOriginal<Record<string, unknown>>()),
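The rewritten mock threads withProgress output through a shared info spy, so tests can assert on both the start and done messages. The same contract can be extracted as a standalone sketch (the real withProgress in src/shared/ui is not shown in this diff; the behavior below is inferred from the mock body):

```typescript
// Sketch of the withProgress contract the mock above emulates: log a start
// message, run the operation, then log a done message that may be either a
// plain string or a function of the operation's result.
type Done<T> = string | ((result: T) => string);

async function withProgress<T>(
  start: string,
  done: Done<T>,
  operation: () => Promise<T> | T,
  log: (msg: string) => void = console.log, // injectable sink for testing
): Promise<T> {
  log(start);
  const result = await operation();
  log(typeof done === 'function' ? done(result) : done);
  return result;
}
```

Accepting `done` as either a string or a callback lets callers format the completion message from the operation's result, which is exactly what the 'Branch name generated: test-task' assertion below relies on.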
@@ -199,6 +208,7 @@ describe('confirmAndCreateWorktree', () => {
 
     // Then
     expect(mockInfo).toHaveBeenCalledWith('Generating branch name...');
+    expect(mockInfo).toHaveBeenCalledWith('Branch name generated: test-task');
   });
 
   it('should skip prompt when override is false', async () => {
@@ -188,7 +188,7 @@ describe('loadAllPieces', () => {
     const samplePiece = `
 name: test-piece
 description: Test piece
-max_iterations: 10
+max_movements: 10
 movements:
   - name: step1
     persona: coder
@@ -114,6 +114,42 @@ describe('createIssueFromTask', () => {
     expect(mockSuccess).not.toHaveBeenCalled();
   });
 
+  describe('return value', () => {
+    it('should return issue number when creation succeeds', () => {
+      // Given
+      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/42' });
+
+      // When
+      const result = createIssueFromTask('Test task');
+
+      // Then
+      expect(result).toBe(42);
+    });
+
+    it('should return undefined when creation fails', () => {
+      // Given
+      mockCreateIssue.mockReturnValue({ success: false, error: 'auth failed' });
+
+      // When
+      const result = createIssueFromTask('Test task');
+
+      // Then
+      expect(result).toBeUndefined();
+    });
+
+    it('should return undefined and display error when URL has non-numeric suffix', () => {
+      // Given
+      mockCreateIssue.mockReturnValue({ success: true, url: 'https://github.com/owner/repo/issues/abc' });
+
+      // When
+      const result = createIssueFromTask('Test task');
+
+      // Then
+      expect(result).toBeUndefined();
+      expect(mockError).toHaveBeenCalledWith('Failed to extract issue number from URL');
+    });
+  });
+
   it('should use first line as title and full text as body for multi-line task', () => {
     // Given: multi-line task
     const task = 'First line title\nSecond line details\nThird line more info';
@@ -63,7 +63,7 @@ describe('debug logging', () => {
     }
   });
 
-  it('should write debug log to project .takt/logs/ directory', () => {
+  it('should write debug log to project .takt/runs/*/logs/ directory', () => {
     const projectDir = join(tmpdir(), 'takt-test-debug-project-' + Date.now());
     mkdirSync(projectDir, { recursive: true });
 
@@ -71,7 +71,9 @@ describe('debug logging', () => {
       initDebugLogger({ enabled: true }, projectDir);
       const logFile = getDebugLogFile();
       expect(logFile).not.toBeNull();
-      expect(logFile!).toContain(join(projectDir, '.takt', 'logs'));
+      expect(logFile!).toContain(join(projectDir, '.takt', 'runs'));
+      expect(logFile!).toContain(`${join(projectDir, '.takt', 'runs')}/`);
+      expect(logFile!).toContain('/logs/');
       expect(logFile!).toMatch(/debug-.*\.log$/);
       expect(existsSync(logFile!)).toBe(true);
     } finally {
@@ -86,7 +88,8 @@ describe('debug logging', () => {
     try {
       initDebugLogger({ enabled: true }, projectDir);
       const promptsLogFile = resolvePromptsLogFilePath();
-      expect(promptsLogFile).toContain(join(projectDir, '.takt', 'logs'));
+      expect(promptsLogFile).toContain(join(projectDir, '.takt', 'runs'));
+      expect(promptsLogFile).toContain('/logs/');
       expect(promptsLogFile).toMatch(/debug-.*-prompts\.jsonl$/);
       expect(existsSync(promptsLogFile)).toBe(true);
     } finally {
@@ -1,6 +1,11 @@
 import { describe, it, expect, afterEach } from 'vitest';
+import { readFileSync, writeFileSync } from 'node:fs';
+import { parse as parseYaml } from 'yaml';
 import { injectProviderArgs } from '../../e2e/helpers/takt-runner.js';
-import { createIsolatedEnv } from '../../e2e/helpers/isolated-env.js';
+import {
+  createIsolatedEnv,
+  updateIsolatedConfig,
+} from '../../e2e/helpers/isolated-env.js';
 
 describe('injectProviderArgs', () => {
   it('should prepend --provider when provider is specified', () => {
@@ -70,4 +75,112 @@ describe('createIsolatedEnv', () => {
     expect(isolated.env.GIT_CONFIG_GLOBAL).toBeDefined();
     expect(isolated.env.GIT_CONFIG_GLOBAL).toContain('takt-e2e-');
   });
+
+  it('should create config.yaml from E2E fixture with notification_sound timing controls', () => {
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
+    const config = parseYaml(configRaw) as Record<string, unknown>;
+
+    expect(config.language).toBe('en');
+    expect(config.log_level).toBe('info');
+    expect(config.default_piece).toBe('default');
+    expect(config.notification_sound).toBe(true);
+    expect(config.notification_sound_events).toEqual({
+      iteration_limit: false,
+      piece_complete: false,
+      piece_abort: false,
+      run_complete: true,
+      run_abort: true,
+    });
+  });
+
+  it('should override provider in config.yaml when TAKT_E2E_PROVIDER is set', () => {
+    process.env = { ...originalEnv, TAKT_E2E_PROVIDER: 'mock' };
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
+    const config = parseYaml(configRaw) as Record<string, unknown>;
+    expect(config.provider).toBe('mock');
+  });
+
+  it('should preserve base settings when updateIsolatedConfig applies patch', () => {
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    updateIsolatedConfig(isolated.taktDir, {
+      provider: 'mock',
+      concurrency: 2,
+    });
+
+    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
+    const config = parseYaml(configRaw) as Record<string, unknown>;
+
+    expect(config.provider).toBe('mock');
+    expect(config.concurrency).toBe(2);
+    expect(config.notification_sound).toBe(true);
+    expect(config.notification_sound_events).toEqual({
+      iteration_limit: false,
+      piece_complete: false,
+      piece_abort: false,
+      run_complete: true,
+      run_abort: true,
+    });
+    expect(config.language).toBe('en');
+  });
+
+  it('should deep-merge notification_sound_events patch and preserve unspecified keys', () => {
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    updateIsolatedConfig(isolated.taktDir, {
+      notification_sound_events: {
+        run_complete: false,
+      },
+    });
+
+    const configRaw = readFileSync(`${isolated.taktDir}/config.yaml`, 'utf-8');
+    const config = parseYaml(configRaw) as Record<string, unknown>;
+
+    expect(config.notification_sound_events).toEqual({
+      iteration_limit: false,
+      piece_complete: false,
+      piece_abort: false,
+      run_complete: false,
+      run_abort: true,
+    });
+  });
+
+  it('should throw when patch.notification_sound_events is not an object', () => {
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    expect(() => {
+      updateIsolatedConfig(isolated.taktDir, {
+        notification_sound_events: true,
+      });
+    }).toThrow('Invalid notification_sound_events in patch: expected object');
+  });
+
+  it('should throw when current config notification_sound_events is invalid', () => {
+    const isolated = createIsolatedEnv();
+    cleanups.push(isolated.cleanup);
+
+    writeFileSync(
+      `${isolated.taktDir}/config.yaml`,
+      [
+        'language: en',
+        'log_level: info',
+        'default_piece: default',
+        'notification_sound: true',
+        'notification_sound_events: true',
+      ].join('\n'),
+    );
+
+    expect(() => {
+      updateIsolatedConfig(isolated.taktDir, { provider: 'mock' });
+    }).toThrow('Invalid notification_sound_events in current config: expected object');
+  });
 });
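The tests above pin down updateIsolatedConfig's merge semantics: top-level keys are shallow-merged, notification_sound_events is deep-merged so unspecified event flags survive, and a non-object value on either side is rejected with a targeted error. A sketch of just that merge rule follows; applyConfigPatch is a hypothetical name, and the real helper additionally round-trips config.yaml on disk, which is omitted here:

```typescript
// Sketch of the patch semantics asserted by the tests above: shallow merge
// overall, deep merge for notification_sound_events, with validation errors
// matching the tested messages. Names other than the error strings are assumed.
type Config = Record<string, unknown>;

function assertEventsObject(value: unknown, where: string): Record<string, unknown> {
  if (typeof value !== 'object' || value === null || Array.isArray(value)) {
    throw new Error(`Invalid notification_sound_events in ${where}: expected object`);
  }
  return value as Record<string, unknown>;
}

function applyConfigPatch(current: Config, patch: Config): Config {
  // Top-level keys: patch wins, everything else is preserved.
  const next: Config = { ...current, ...patch };
  if ('notification_sound_events' in current || 'notification_sound_events' in patch) {
    const currentEvents =
      'notification_sound_events' in current
        ? assertEventsObject(current.notification_sound_events, 'current config')
        : {};
    const patchEvents =
      'notification_sound_events' in patch
        ? assertEventsObject(patch.notification_sound_events, 'patch')
        : {};
    // Deep merge one level down: unspecified event flags keep their values.
    next.notification_sound_events = { ...currentEvents, ...patchEvents };
  }
  return next;
}
```

Validating the current config even when the patch does not touch notification_sound_events matches the last test above, where a provider-only patch still rejects a corrupted config.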
Some files were not shown because too many files have changed in this diff.