feat: introduce refill threshold and dynamic part addition to TeamLeader

Split TeamLeaderRunner into four modules (execution, aggregation, common, streaming)
and implement a worker-pool execution model that dynamically generates additional
tasks when, on part completion, the remaining queue drops to refill_threshold or
below. Add LineTimeSliceBuffer to ParallelLogger to improve streaming output.
Add a team_leader configuration to the deep-research piece.
nrslib 2026-02-26 22:33:22 +09:00
parent deca6a2f3d
commit 798e89605d
47 changed files with 1836 additions and 262 deletions
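The worker-pool scheduling described in the commit message can be sketched roughly as follows. All names here (`runPool`, `requestMoreParts`, `Part`) are illustrative rather than the repository's actual API, and the sketch runs parts one at a time where the real runner executes them concurrently:

```typescript
// Minimal sketch of worker-pool scheduling with a refill threshold.
type Part = { id: string; title: string; instruction: string };

interface MorePartsResult {
  done: boolean; // true when the leader decides no further parts are needed
  parts: Part[];
}

function runPool(
  initial: Part[],
  requestMoreParts: (completedIds: string[]) => MorePartsResult,
  runPart: (p: Part) => string,
  refillThreshold: number,
): string[] {
  const queue = [...initial];
  const completedIds: string[] = [];
  const results: string[] = [];
  let leaderDone = false;

  while (queue.length > 0) {
    const part = queue.shift()!;
    results.push(runPart(part));
    completedIds.push(part.id);

    // Refill: once the queue drains to the threshold or below, keep asking
    // the leader for additional parts until it reports done (or adds none).
    while (!leaderDone && queue.length <= refillThreshold) {
      const more = requestMoreParts(completedIds);
      leaderDone = more.done;
      if (more.parts.length === 0) break;
      queue.push(...more.parts);
    }
  }
  return results;
}
```

With three initial parts and `refillThreshold` 2, this reproduces the refill-threshold E2E scenario further below: two refill batches of two parts each, then a final `done` response, for seven parts total.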


@@ -1,41 +1,40 @@
 # Repository Guidelines
-This document collects practical guidelines for contributing to this repository. Short, concrete explanations and examples reduce hesitation during work.
 ## Project Structure & Module Organization
-- Main sources live in `src/`; the entry point is `src/index.ts` and the CLI is `src/app/cli/index.ts`.
-- Tests go in `src/__tests__/` with names that make the target clear (e.g., `client.test.ts`).
-- Build artifacts are managed in `dist/`, executable scripts in `bin/`, static resources in `resources/`, and documentation in `docs/`.
-- Runtime configuration and caches live in `~/.takt/`; see `.takt/` for project-specific settings.
+- `src/`: main TypeScript code. The CLI lives in `src/app/cli/`, core execution logic in `src/core/`, shared functionality in `src/shared/`, and per-feature implementations in `src/features/`.
+- `src/__tests__/`: unit and integration tests (`*.test.ts`).
+- `e2e/`: E2E tests and supporting helpers.
+- `builtins/`: built-in pieces, templates, and schemas.
+- `docs/`: design, CLI, and operations documentation.
+- `dist/`: build artifacts (generated; do not edit by hand).
+- `bin/`: CLI entry points (`takt`, `takt-dev`).
 ## Build, Test, and Development Commands
-- `npm run build`: compile TypeScript and generate `dist/`.
-- `npm run watch`: rebuild while watching for source changes.
-- `npm run lint`: analyze `src/` with ESLint.
-- `npm run test`: run all tests with Vitest.
-- `npm run test:watch`: run tests in watch mode.
-- `npx vitest run src/__tests__/client.test.ts`: example of running a single test file.
+- `npm install`: install dependencies.
+- `npm run build`: build TypeScript into `dist/` and copy prompt, i18n, and preset files.
+- `npm run watch`: continuous build via `tsc --watch`.
+- `npm run lint`: check `src/` with ESLint.
+- `npm test`: run the regular test suite with `vitest run`.
+- `npm run test:e2e:mock`: run E2E tests with the mock provider.
+- `npm run test:e2e:all`: run the mock + provider E2E suites back to back.
 ## Coding Style & Naming Conventions
-- Assume TypeScript + strict; prioritize null safety and readability.
-- Because the project is ESM, always use the `.js` extension in `import` specifiers.
-- Naming uses camelCase (functions, variables) and PascalCase (classes).
-- Organize shared types in `src/types/` and match existing naming patterns.
-- Follow the ESLint and Prettier conventions and run `npm run lint` after changes.
+- The language is TypeScript (ESM). Use 2-space indentation and keep to the existing style.
+- File names use feature-describing `kebab-case` or follow existing precedent (e.g., `taskHistory.ts`).
+- Test names should be concrete enough to identify the target feature (e.g., `provider-model.test.ts`).
+- Lint rules: `@typescript-eslint/no-explicit-any` and unused variables are enforced strictly (unused arguments are allowed with a `_` prefix).
 ## Testing Guidelines
-- The test framework is Vitest (`vitest.config.ts`).
-- Add related tests for new features and fixes.
-- File names use `<target>.test.ts` or `<target>.spec.ts`.
-- Isolate state with mocks and stubs where dependencies are heavy.
+- The framework is Vitest, running in a Node environment.
+- At minimum, make `npm test` pass for every change; for changes that affect execution paths, also verify `npm run test:e2e:mock`.
+- Coverage uses Vitest's V8 reporter (text/json/html).
 ## Commit & Pull Request Guidelines
-- Commit messages center on short summaries; both Japanese and English are used.
-- Prefixes such as `fix:` and `hotfix:` and Issue references like `#32` appear in history; add them as needed.
-- Version bumps and changelog updates get explicit messages (e.g., `0.5.1`, `update CHANGELOG`).
-- Describe the change summary, test results, and related Issues in PRs, and split PRs small to keep review load down. Attach screenshots or logs for UI/log changes.
+- Keep commits small: one purpose per commit.
+- Conventional Commits are recommended (`feat:`, `fix:`, `refactor:`, `test:`). Add Issue numbers as needed (e.g., `fix: ... (#388)` / `[#367] ...`).
+- In PRs, state the purpose, changes, test results, and impact; attach reproduction steps for behavior changes.
+- Agree on large changes in an Issue first, and update related documentation (`README.md` / `docs/`) as well.
 ## Security & Configuration Tips
-- Report vulnerabilities directly to the maintainers, not in public Issues.
-- Do not share files that may contain sensitive information, such as `.takt/logs/`.
-- Keep the `trusted` directories in `~/.takt/config.yaml` minimal; do not register unnecessary paths.
-- When adding new pieces, match the existing schemas in `~/.takt/pieces/`.
+- Never commit secrets (API keys, tokens). Use `~/.takt/config.yaml` or environment variables for configuration.
+- When changing providers or execution modes, check `docs/configuration.md` and `docs/provider-sandbox.md` first.


@@ -1,24 +1,29 @@
-Execute the research according to the plan (or additional research instructions).
+Decompose the research plan (or additional research instructions) into independent subtasks and execute the investigation in parallel.
 **What to do:**
-1. Execute planned research items in order
-2. Actually investigate each item (web search, codebase search, etc.)
-3. Report items that could not be researched as "Unable to research"
-4. Save research data to `{report_dir}/data-{topic-name}.md` as files
-5. Organize results and create a report
-**External data downloads:**
-- Actively download and utilize CSV, Excel, JSON, and other data files from public institutions and trusted sources
-- Always verify source reliability before downloading (government agencies, academic institutions, official corporate sites, etc.)
-- Save downloaded files to `{report_dir}/`
-- Never download from suspicious domains or download executable files
-**Data saving rules:**
+1. Analyze research items from the plan and decompose them into independently executable subtasks
+2. Include clear research scope and expected deliverables in each subtask's instruction
+3. Include the following data saving rules and report structure in each subtask's instruction
+**Subtask decomposition guidelines:**
+- Prioritize topic independence (group interdependent items into the same subtask)
+- Avoid spreading high-priority items (P1) across too many subtasks
+- Balance workload evenly across subtasks
+**Rules to include in each subtask's instruction:**
+Data saving rules:
 - Write data per research item to `{report_dir}/data-{topic-name}.md`
 - Topic names in lowercase English with hyphens (e.g., `data-market-size.md`)
 - Include source URLs, retrieval dates, and raw data
-**Report structure:**
+External data downloads:
+- Actively download and utilize CSV, Excel, JSON, and other data files from public institutions and trusted sources
+- Always verify source reliability before downloading
+- Save downloaded files to `{report_dir}/`
+- Never download from suspicious domains or download executable files
+Report structure (per subtask):
 - Results and details per research item
 - Summary of key findings
 - Caveats and risks


@@ -31,6 +31,18 @@ movements:
     knowledge: research
     instruction: research-dig
     edit: true
+    team_leader:
+      max_parts: 3
+      part_persona: research-digger
+      part_edit: true
+      part_allowed_tools:
+        - Read
+        - Write
+        - Bash
+        - Glob
+        - Grep
+        - WebSearch
+        - WebFetch
     allowed_tools:
       - Read
      - Write


@@ -1,24 +1,29 @@
-Execute the research according to the plan (or additional research instructions).
+Decompose the research plan (or additional research instructions) into independent subtasks and execute the investigation in parallel.
 **What to do:**
-1. Execute planned research items in order
-2. Actually investigate each item (web search, codebase search, etc.)
-3. Report items that could not be researched as "Unable to research"
-4. Save research data to `{report_dir}/data-{topic-name}.md` as files
-5. Organize results and create a report
-**External data downloads:**
-- Actively download and utilize CSV, Excel, JSON, and other data files from public institutions and trusted sources
-- Always verify source reliability before downloading (government agencies, academic institutions, official corporate sites, etc.)
-- Save downloaded files to `{report_dir}/`
-- Never download from suspicious domains or download executable files
-**Data saving rules:**
+1. Analyze research items from the plan and decompose them into independently executable subtasks
+2. Include clear research scope and expected deliverables in each subtask's instruction
+3. Include the following data saving rules and report structure in each subtask's instruction
+**Subtask decomposition guidelines:**
+- Prioritize topic independence (group interdependent items into the same subtask)
+- Avoid spreading high-priority items (P1) across too many subtasks
+- Balance workload evenly across subtasks
+**Rules to include in each subtask's instruction:**
+Data saving rules:
 - Write data per research item to `{report_dir}/data-{topic-name}.md`
 - Topic names in lowercase English with hyphens (e.g., `data-market-size.md`)
 - Include source URLs, retrieval dates, and raw data
-**Report structure:**
+External data downloads:
+- Actively download and utilize CSV, Excel, JSON, and other data files from public institutions and trusted sources
+- Always verify source reliability before downloading
+- Save downloaded files to `{report_dir}/`
+- Never download from suspicious domains or download executable files
+Report structure (per subtask):
 - Results and details per research item
 - Summary of key findings
 - Caveats and risks


@@ -31,6 +31,18 @@ movements:
     knowledge: research
     instruction: research-dig
     edit: true
+    team_leader:
+      max_parts: 3
+      part_persona: research-digger
+      part_edit: true
+      part_allowed_tools:
+        - Read
+        - Write
+        - Bash
+        - Glob
+        - Grep
+        - WebSearch
+        - WebFetch
     allowed_tools:
       - Read
      - Write


@@ -0,0 +1,41 @@
{
"type": "object",
"properties": {
"done": {
"type": "boolean",
"description": "True when no additional part is needed"
},
"reasoning": {
"type": "string",
"description": "Brief reason for the decision"
},
"parts": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Unique part identifier"
},
"title": {
"type": "string",
"description": "Human-readable part title"
},
"instruction": {
"type": "string",
"description": "Instruction for the part agent"
},
"timeout_ms": {
"type": ["integer", "null"],
"description": "Optional timeout in ms"
}
},
"required": ["id", "title", "instruction", "timeout_ms"],
"additionalProperties": false
}
}
},
"required": ["done", "reasoning", "parts"],
"additionalProperties": false
}
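A response satisfying the schema above maps naturally onto a TypeScript shape plus a small runtime guard. This is a sketch only; the names below are illustrative and the real loader validates via the JSON Schema itself rather than a hand-written guard:

```typescript
// Illustrative types mirroring the more-parts JSON schema above.
interface MorePartsOutput {
  done: boolean;
  reasoning: string;
  parts: {
    id: string;
    title: string;
    instruction: string;
    timeout_ms: number | null;
  }[];
}

// Minimal structural check for a parsed structured output
// (required keys and basic types only, not a full JSON Schema validator).
function isMorePartsOutput(value: unknown): value is MorePartsOutput {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.done === 'boolean' &&
    typeof v.reasoning === 'string' &&
    Array.isArray(v.parts) &&
    v.parts.every((p: unknown) => {
      if (typeof p !== 'object' || p === null) return false;
      const part = p as Record<string, unknown>;
      return (
        typeof part.id === 'string' &&
        typeof part.title === 'string' &&
        typeof part.instruction === 'string' &&
        (typeof part.timeout_ms === 'number' || part.timeout_ms === null)
      );
    })
  );
}
```

Note that `timeout_ms` is listed in `required` yet nullable, so a part must always carry the key, even when no timeout applies.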


@@ -0,0 +1,11 @@
# E2E Test Team Leader (Refill Threshold)
You are a team leader for refill-threshold E2E testing.
## Instructions
- In decomposition, create exactly 5 independent parts.
- Each part must create exactly one target file.
- In feedback planning, if not all required files are complete, add only the missing parts.
- Add exactly 2 more parts after the first batch so total planned parts become 7.
- Stop when all required files are completed.


@@ -0,0 +1,11 @@
# E2E Test Team Leader (Worker Pool)
You are a team leader for worker-pool E2E testing.
## Instructions
- Always keep each part independent and self-contained.
- Each part must create exactly one target file.
- In decomposition, return exactly 2 parts.
- In follow-up decisions, if target files are still missing, return additional parts for missing files.
- Stop only when all required files are created.


@@ -0,0 +1,9 @@
# E2E Test Team Leader
You are a team leader for E2E testing. Your job is to decompose a task into independent subtasks.
## Instructions
- Analyze the task and split it into 2 independent parts
- Each part should be self-contained and executable independently
- Keep instructions concise and actionable


@@ -0,0 +1,28 @@
name: e2e-team-leader-refill-threshold
description: E2E test for team_leader refill threshold behavior
piece_config:
provider_options:
codex:
network_access: true
opencode:
network_access: true
max_movements: 5
movements:
- name: execute
edit: true
persona: ../agents/test-team-leader-refill-threshold.md
required_permission_mode: edit
team_leader:
max_parts: 3
refill_threshold: 2
part_persona: ../agents/test-coder.md
part_edit: true
part_allowed_tools:
- Read
- Write
- Edit
instruction_template: |
{task}
rules:
- condition: Task completed
next: COMPLETE
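Given the `team_leader` block above (`max_parts: 3`, `refill_threshold: 2`), a refill check fires whenever a part completes and at most two queued parts remain. A minimal sketch of that condition, with a hypothetical helper name:

```typescript
// Illustrative refill condition: ask the leader for more parts once the
// queue drains to the threshold or below, unless the leader already said done.
function shouldRequestRefill(
  queuedParts: number,
  refillThreshold: number,
  leaderDone: boolean,
): boolean {
  return !leaderDone && queuedParts <= refillThreshold;
}
```

With `refill_threshold: 2`, finishing one of three queued parts (leaving two) triggers a refill request; with three still queued it does not.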


@@ -0,0 +1,27 @@
name: e2e-team-leader-worker-pool
description: E2E test piece for dynamic team_leader worker-pool behavior
piece_config:
provider_options:
codex:
network_access: true
opencode:
network_access: true
max_movements: 5
movements:
- name: execute
edit: true
persona: ../agents/test-team-leader-worker-pool.md
required_permission_mode: edit
team_leader:
max_parts: 2
part_persona: ../agents/test-coder.md
part_edit: true
part_allowed_tools:
- Read
- Write
- Edit
instruction_template: |
{task}
rules:
- condition: Task completed
next: COMPLETE


@@ -0,0 +1,27 @@
name: e2e-team-leader
description: E2E test piece for team_leader movement
piece_config:
provider_options:
codex:
network_access: true
opencode:
network_access: true
max_movements: 5
movements:
- name: execute
edit: true
persona: ../agents/test-team-leader.md
required_permission_mode: edit
team_leader:
max_parts: 2
part_persona: ../agents/test-coder.md
part_edit: true
part_allowed_tools:
- Read
- Write
- Edit
instruction_template: |
{task}
rules:
- condition: Task completed
next: COMPLETE


@@ -0,0 +1,57 @@
[
{
"persona": "agents/test-team-leader-refill-threshold",
"status": "done",
"content": "decompose",
"structured_output": {
"parts": [
{ "id": "rt-1", "title": "Create rt-1.txt", "instruction": "Create rt-1.txt with content rt-1.txt", "timeout_ms": null },
{ "id": "rt-2", "title": "Create rt-2.txt", "instruction": "Create rt-2.txt with content rt-2.txt", "timeout_ms": null },
{ "id": "rt-3", "title": "Create rt-3.txt", "instruction": "Create rt-3.txt with content rt-3.txt", "timeout_ms": null }
]
}
},
{
"persona": "agents/test-team-leader-refill-threshold",
"status": "done",
"content": "add-4-5",
"structured_output": {
"done": false,
"reasoning": "Need two more tasks now",
"parts": [
{ "id": "rt-4", "title": "Create rt-4.txt", "instruction": "Create rt-4.txt with content rt-4.txt", "timeout_ms": null },
{ "id": "rt-5", "title": "Create rt-5.txt", "instruction": "Create rt-5.txt with content rt-5.txt", "timeout_ms": null }
]
}
},
{
"persona": "agents/test-team-leader-refill-threshold",
"status": "done",
"content": "add-6-7",
"structured_output": {
"done": false,
"reasoning": "Need final two tasks",
"parts": [
{ "id": "rt-6", "title": "Create rt-6.txt", "instruction": "Create rt-6.txt with content rt-6.txt", "timeout_ms": null },
{ "id": "rt-7", "title": "Create rt-7.txt", "instruction": "Create rt-7.txt with content rt-7.txt", "timeout_ms": null }
]
}
},
{
"persona": "agents/test-team-leader-refill-threshold",
"status": "done",
"content": "done",
"structured_output": {
"done": true,
"reasoning": "All tasks planned",
"parts": []
}
},
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-1" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-2" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-3" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-4" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-5" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-6" },
{ "persona": "agents/test-coder", "status": "done", "content": "completed rt-7" }
]
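A simplified sketch of how such a scenario fixture can be played back sequentially. This is illustrative only; the real mock provider (selected via `TAKT_MOCK_SCENARIO`) may match records differently rather than strictly in file order:

```typescript
// Illustrative sequential playback of a mock scenario fixture.
interface ScenarioRecord {
  persona: string;
  status: string;
  content: string;
  structured_output?: unknown;
}

function createScenarioPlayer(records: ScenarioRecord[]) {
  let cursor = 0;
  return {
    // Each mock agent invocation consumes the next record.
    next(): ScenarioRecord {
      if (cursor >= records.length) {
        throw new Error('Scenario exhausted: more agent calls than records');
      }
      return records[cursor++];
    },
  };
}
```

Failing loudly on exhaustion keeps an E2E run from silently looping when the engine makes more agent calls than the fixture anticipated.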


@@ -104,7 +104,7 @@ export function createIsolatedEnv(): IsolatedEnv {
     ? {
         ...baseConfig,
         provider,
-        ...(provider === 'opencode' && model ? { model } : {}),
+        ...(model ? { model } : {}),
       }
     : baseConfig;
   writeConfigFile(taktDir, config);


@@ -0,0 +1,69 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { createLocalRepo, type LocalRepo } from '../helpers/test-repo';
import { runTakt } from '../helpers/takt-runner';
import { readSessionRecords } from '../helpers/session-log';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function countPartSections(stepContent: string): number {
const matches = stepContent.match(/^## [^:\n]+: .+$/gm);
return matches?.length ?? 0;
}
describe('E2E: Team leader refill threshold', () => {
let isolatedEnv: IsolatedEnv;
let repo: LocalRepo;
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
delete isolatedEnv.env.CLAUDECODE;
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('初回5パートから追加で7パートまで拡張して完了できる', () => {
const piecePath = resolve(__dirname, '../fixtures/pieces/team-leader-refill-threshold.yaml');
const scenarioPath = resolve(__dirname, '../fixtures/scenarios/team-leader-refill-threshold.json');
const result = runTakt({
args: [
'--provider', 'mock',
'--task',
'Create exactly seven files: rt-1.txt, rt-2.txt, rt-3.txt, rt-4.txt, rt-5.txt, rt-6.txt, rt-7.txt. Each file must contain its own filename as content. Each part must create exactly one file.',
'--piece',
piecePath,
'--create-worktree',
'no',
],
cwd: repo.path,
env: {
...isolatedEnv.env,
TAKT_MOCK_SCENARIO: scenarioPath,
},
timeout: 120_000,
});
if (result.exitCode !== 0) {
console.log('=== STDOUT ===\n', result.stdout);
console.log('=== STDERR ===\n', result.stderr);
}
expect(result.exitCode).toBe(0);
expect(result.stdout).toContain('Piece completed');
const records = readSessionRecords(repo.path);
const stepComplete = records.find((r) => r.type === 'step_complete' && r.step === 'execute');
expect(stepComplete).toBeDefined();
const content = String(stepComplete?.content ?? '');
const partSectionCount = countPartSections(content);
expect(partSectionCount).toBeGreaterThanOrEqual(7);
}, 120_000);
});
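The `countPartSections` helper above counts aggregated part headings of the form `## <id>: <title>` in the step output. A quick illustration of what the regex does and does not match:

```typescript
// Same helper as in the test above: one `## <id>: <title>` heading per part.
function countPartSections(stepContent: string): number {
  const matches = stepContent.match(/^## [^:\n]+: .+$/gm);
  return matches?.length ?? 0;
}

const aggregated = [
  '## decomposition',         // no colon, so not counted as a part
  'planned 3 parts',
  '## rt-1: Create rt-1.txt', // counted
  'completed rt-1',
  '## rt-2: Create rt-2.txt', // counted
].join('\n');
```

The `[^:\n]+` segment keeps the match anchored to a single heading line and requires a colon, so section headers like `## decomposition` are excluded.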


@@ -0,0 +1,70 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { existsSync } from 'node:fs';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { createLocalRepo, type LocalRepo } from '../helpers/test-repo';
import { runTakt } from '../helpers/takt-runner';
import { readSessionRecords } from '../helpers/session-log';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function countPartSections(stepContent: string): number {
const matches = stepContent.match(/^## [^:\n]+: .+$/gm);
return matches?.length ?? 0;
}
describe('E2E: Team leader worker-pool dynamic scheduling', () => {
let isolatedEnv: IsolatedEnv;
let repo: LocalRepo;
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
delete isolatedEnv.env.CLAUDECODE;
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('max_parts=2 でも 5タスクを順次取得して完了できる', () => {
const piecePath = resolve(__dirname, '../fixtures/pieces/team-leader-worker-pool.yaml');
const result = runTakt({
args: [
'--task',
'Create exactly five files: wp-1.txt, wp-2.txt, wp-3.txt, wp-4.txt, wp-5.txt. Each file must contain its own filename as content. Each part must create exactly one file, and you must complete all five files.',
'--piece',
piecePath,
'--create-worktree',
'no',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 300_000,
});
if (result.exitCode !== 0) {
console.log('=== STDOUT ===\n', result.stdout);
console.log('=== STDERR ===\n', result.stderr);
}
expect(result.exitCode).toBe(0);
expect(result.stdout).toContain('Piece completed');
const records = readSessionRecords(repo.path);
const stepComplete = records.find((r) => r.type === 'step_complete' && r.step === 'execute');
expect(stepComplete).toBeDefined();
const content = String(stepComplete?.content ?? '');
const partSectionCount = countPartSections(content);
expect(partSectionCount).toBeGreaterThanOrEqual(5);
const allFilesCreated = [1, 2, 3, 4, 5]
.map((index) => existsSync(resolve(repo.path, `wp-${index}.txt`)))
.every((exists) => exists);
expect(allFilesCreated).toBe(true);
}, 300_000);
});


@@ -0,0 +1,83 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { existsSync } from 'node:fs';
import { createIsolatedEnv, type IsolatedEnv } from '../helpers/isolated-env';
import { createLocalRepo, type LocalRepo } from '../helpers/test-repo';
import { runTakt } from '../helpers/takt-runner';
import { readSessionRecords } from '../helpers/session-log';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
/**
* E2E: Team leader movement (task decomposition + parallel part execution).
*
* Verifies that real providers can execute a piece with a `team_leader`
* movement that decomposes a task into subtasks and executes them in parallel.
*
* The piece uses `max_parts: 2` to decompose a simple file creation task
* into 2 independent parts, each writing a separate file.
*
* Run with:
* TAKT_E2E_PROVIDER=claude npx vitest run e2e/specs/team-leader.e2e.ts --config vitest.config.e2e.provider.ts
*/
describe('E2E: Team leader movement', () => {
let isolatedEnv: IsolatedEnv;
let repo: LocalRepo;
beforeEach(() => {
isolatedEnv = createIsolatedEnv();
// Unset CLAUDECODE to allow nested Claude Code sessions in E2E tests
delete isolatedEnv.env.CLAUDECODE;
repo = createLocalRepo();
});
afterEach(() => {
try { repo.cleanup(); } catch { /* best-effort */ }
try { isolatedEnv.cleanup(); } catch { /* best-effort */ }
});
it('should decompose task into parts and execute them in parallel', () => {
const piecePath = resolve(__dirname, '../fixtures/pieces/team-leader.yaml');
const result = runTakt({
args: [
'--task', 'Create two files: hello-en.txt containing "Hello World" and hello-ja.txt containing "こんにちは世界"',
'--piece', piecePath,
'--create-worktree', 'no',
],
cwd: repo.path,
env: isolatedEnv.env,
timeout: 240_000,
});
if (result.exitCode !== 0) {
console.log('=== STDOUT ===\n', result.stdout);
console.log('=== STDERR ===\n', result.stderr);
}
expect(result.exitCode).toBe(0);
expect(result.stdout).toContain('Piece completed');
// Verify session log has proper records
const records = readSessionRecords(repo.path);
const pieceComplete = records.find((r) => r.type === 'piece_complete');
expect(pieceComplete).toBeDefined();
const stepComplete = records.find((r) => r.type === 'step_complete' && r.step === 'execute');
expect(stepComplete).toBeDefined();
// The aggregated content should contain decomposition and part results
const content = stepComplete?.content as string | undefined;
expect(content).toBeDefined();
expect(content).toContain('## decomposition');
// At least one output file should exist
const enExists = existsSync(resolve(repo.path, 'hello-en.txt'));
const jaExists = existsSync(resolve(repo.path, 'hello-ja.txt'));
console.log(`=== Files created: hello-en.txt=${enExists}, hello-ja.txt=${jaExists} ===`);
expect(enExists || jaExists).toBe(true);
}, 240_000);
});


@@ -9,6 +9,7 @@ import {
   evaluateCondition,
   judgeStatus,
   decomposeTask,
+  requestMoreParts,
 } from '../core/piece/agent-usecases.js';
 vi.mock('../agents/runner.js', () => ({
@@ -19,6 +20,7 @@ vi.mock('../core/piece/schema-loader.js', () => ({
   loadJudgmentSchema: vi.fn(() => ({ type: 'judgment' })),
   loadEvaluationSchema: vi.fn(() => ({ type: 'evaluation' })),
   loadDecompositionSchema: vi.fn((maxParts: number) => ({ type: 'decomposition', maxParts })),
+  loadMorePartsSchema: vi.fn((maxAdditionalParts: number) => ({ type: 'more-parts', maxAdditionalParts })),
 }));
 vi.mock('../core/piece/engine/task-decomposer.js', () => ({
@@ -229,4 +231,50 @@ describe('agent-usecases', () => {
     await expect(decomposeTask('instruction', 2, { cwd: '/repo' }))
       .rejects.toThrow('Team leader failed: bad output');
   });
+  it('requestMoreParts は構造化出力をパースして返す', async () => {
+    vi.mocked(runAgent).mockResolvedValue(doneResponse('x', {
+      done: false,
+      reasoning: 'Need one more part',
+      parts: [
+        { id: 'p3', title: 'Part 3', instruction: 'Do 3', timeout_ms: null },
+      ],
+    }));
+    const result = await requestMoreParts(
+      'original instruction',
+      [{ id: 'p1', title: 'Part 1', status: 'done', content: 'done' }],
+      ['p1', 'p2'],
+      2,
+      { cwd: '/repo', persona: 'team-leader' },
+    );
+    expect(result).toEqual({
+      done: false,
+      reasoning: 'Need one more part',
+      parts: [{ id: 'p3', title: 'Part 3', instruction: 'Do 3', timeoutMs: undefined }],
+    });
+    expect(runAgent).toHaveBeenCalledWith('team-leader', expect.stringContaining('original instruction'), expect.objectContaining({
+      outputSchema: { type: 'more-parts', maxAdditionalParts: 2 },
+      permissionMode: 'readonly',
+    }));
+  });
+  it('requestMoreParts は done 以外をエラーにする', async () => {
+    vi.mocked(runAgent).mockResolvedValue({
+      persona: 'team-leader',
+      status: 'error',
+      content: 'feedback failed',
+      error: 'timeout',
+      timestamp: new Date('2026-02-12T00:00:00Z'),
+    });
+    await expect(requestMoreParts(
+      'instruction',
+      [{ id: 'p1', title: 'Part 1', status: 'done', content: 'ok' }],
+      ['p1'],
+      1,
+      { cwd: '/repo', persona: 'team-leader' },
+    )).rejects.toThrow('Team leader feedback failed: timeout');
+  });
 });


@@ -36,6 +36,7 @@ function buildTeamLeaderConfig(): PieceConfig {
     teamLeader: {
       persona: '../personas/team-leader.md',
       maxParts: 3,
+      refillThreshold: 0,
       timeoutMs: 10000,
       partPersona: '../personas/coder.md',
       partAllowedTools: ['Read', 'Edit', 'Write'],
@@ -77,14 +78,18 @@ describe('PieceEngine Integration: TeamLeaderRunner', () => {
         ].join('\n'),
       }))
       .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'API done' }))
-      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Tests done' }));
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Tests done' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: { done: true, reasoning: 'enough', parts: [] },
+      }));
     vi.mocked(detectMatchedRule).mockResolvedValueOnce({ index: 0, method: 'phase1_tag' });
     const state = await engine.run();
     expect(state.status).toBe('completed');
-    expect(vi.mocked(runAgent)).toHaveBeenCalledTimes(3);
+    expect(vi.mocked(runAgent)).toHaveBeenCalledTimes(4);
     const output = state.movementOutputs.get('implement');
     expect(output).toBeDefined();
     expect(output!.content).toContain('## decomposition');
@@ -108,7 +113,11 @@ describe('PieceEngine Integration: TeamLeaderRunner', () => {
         ].join('\n'),
       }))
       .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', error: 'api failed' }))
-      .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', error: 'test failed' }));
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', error: 'test failed' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: { done: true, reasoning: 'stop', parts: [] },
+      }));
     const state = await engine.run();
@@ -129,7 +138,11 @@ describe('PieceEngine Integration: TeamLeaderRunner', () => {
         ].join('\n'),
       }))
       .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'API done' }))
-      .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', error: 'test failed' }));
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', error: 'test failed' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: { done: true, reasoning: 'stop', parts: [] },
+      }));
     vi.mocked(detectMatchedRule).mockResolvedValueOnce({ index: 0, method: 'phase1_tag' });
@@ -158,7 +171,11 @@ describe('PieceEngine Integration: TeamLeaderRunner', () => {
         ].join('\n'),
       }))
       .mockResolvedValueOnce(makeResponse({ persona: 'coder', status: 'error', content: 'api failed from content' }))
-      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Tests done' }));
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Tests done' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: { done: true, reasoning: 'stop', parts: [] },
+      }));
     vi.mocked(detectMatchedRule).mockResolvedValueOnce({ index: 0, method: 'phase1_tag' });
@@ -169,4 +186,53 @@ describe('PieceEngine Integration: TeamLeaderRunner', () => {
     expect(output).toBeDefined();
     expect(output!.content).toContain('[ERROR] api failed from content');
   });
+  it('結果に応じて追加パートを生成して実行する', async () => {
+    const config = buildTeamLeaderConfig();
+    const engine = new PieceEngine(config, tmpDir, 'implement feature', { projectCwd: tmpDir });
+    vi.mocked(runAgent)
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: {
+          parts: [
+            { id: 'part-1', title: 'API', instruction: 'Implement API', timeout_ms: null },
+            { id: 'part-2', title: 'Test', instruction: 'Add tests', timeout_ms: null },
+          ],
+        },
+      }))
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'API done' }))
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Tests done' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: {
+          done: false,
+          reasoning: 'Need docs',
+          parts: [
+            { id: 'part-3', title: 'Docs', instruction: 'Write docs', timeout_ms: null },
+          ],
+        },
+      }))
+      .mockResolvedValueOnce(makeResponse({ persona: 'coder', content: 'Docs done' }))
+      .mockResolvedValueOnce(makeResponse({
+        persona: 'team-leader',
+        structuredOutput: {
+          done: true,
+          reasoning: 'Enough',
+          parts: [],
+        },
+      }));
+    vi.mocked(detectMatchedRule).mockResolvedValueOnce({ index: 0, method: 'phase1_tag' });
+    const state = await engine.run();
+    expect(state.status).toBe('completed');
+    expect(vi.mocked(runAgent)).toHaveBeenCalledTimes(6);
+    const output = state.movementOutputs.get('implement');
+    expect(output).toBeDefined();
+    expect(output!.content).toContain('## part-3: Docs');
+    expect(output!.content).toContain('Docs done');
+  });
 });


@ -2,7 +2,7 @@
* Tests for parallel-logger module * Tests for parallel-logger module
*/ */
import { describe, it, expect, beforeEach } from 'vitest'; import { describe, it, expect, beforeEach, vi } from 'vitest';
import { ParallelLogger } from '../core/piece/index.js'; import { ParallelLogger } from '../core/piece/index.js';
import type { StreamEvent } from '../core/piece/index.js'; import type { StreamEvent } from '../core/piece/index.js';
@ -11,6 +11,7 @@ describe('ParallelLogger', () => {
let writeFn: (text: string) => void; let writeFn: (text: string) => void;
beforeEach(() => { beforeEach(() => {
vi.useRealTimers();
output = []; output = [];
writeFn = (text: string) => output.push(text); writeFn = (text: string) => output.push(text);
}); });
@ -334,6 +335,45 @@ describe('ParallelLogger', () => {
logger.flush(); logger.flush();
expect(output).toHaveLength(0); // Nothing to flush expect(output).toHaveLength(0); // Nothing to flush
}); });
it('should flush partial lines by time-slice', async () => {
vi.useFakeTimers();
const logger = new ParallelLogger({
subMovementNames: ['step-a'],
writeFn,
flushIntervalMs: 50,
minTimedFlushChars: 1,
});
const handler = logger.createStreamHandler('step-a', 0);
handler({ type: 'text', data: { text: 'partial' } } as StreamEvent);
expect(output).toHaveLength(0);
await vi.advanceTimersByTimeAsync(60);
expect(output).toHaveLength(1);
expect(output[0]).toContain('[step-a]');
expect(output[0]).toContain('partial');
});
it('should prefer boundary-aware timed flush to reduce mid-word splits', async () => {
vi.useFakeTimers();
const logger = new ParallelLogger({
subMovementNames: ['step-a'],
writeFn,
flushIntervalMs: 50,
minTimedFlushChars: 10,
});
const handler = logger.createStreamHandler('step-a', 0);
handler({ type: 'text', data: { text: 'alpha beta gamma delta' } } as StreamEvent);
await vi.advanceTimersByTimeAsync(60);
expect(output).toHaveLength(1);
expect(output[0]).toContain('alpha beta gamma ');
logger.flush();
expect(output[1]).toContain('delta');
});
});
describe('printSummary', () => {
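The boundary-aware timed flush behaviour exercised above can be sketched as a small standalone helper. This is a sketch only, not the actual ParallelLogger internals; the function name `splitAtLastBoundary` is illustrative, and the real logger may treat punctuation as a boundary in addition to whitespace:

```typescript
// Split a pending chunk at the last whitespace boundary so a timed
// flush does not cut a word in half. Returns the flushable head and
// the remainder to keep buffering.
function splitAtLastBoundary(pending: string): { head: string; rest: string } {
  const idx = pending.lastIndexOf(' ');
  if (idx < 0) {
    // No boundary found: flush nothing and keep buffering
    // (a caller may force-flush after a maximum wait).
    return { head: '', rest: pending };
  }
  // Include the boundary character in the flushed head, matching the
  // 'alpha beta gamma ' expectation in the test above.
  return { head: pending.slice(0, idx + 1), rest: pending.slice(idx + 1) };
}

const { head, rest } = splitAtLastBoundary('alpha beta gamma delta');
console.log(head); // 'alpha beta gamma '
console.log(rest); // 'delta'
```

The no-boundary branch is why a `maxTimedBufferMs`-style escape hatch (tested further below for CJK text without spaces) is needed at all.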

View File

@ -25,6 +25,26 @@ const readFileSyncMock = vi.fn((path: string) => {
},
});
}
if (path.endsWith('more-parts.json')) {
return JSON.stringify({
type: 'object',
properties: {
done: { type: 'boolean' },
reasoning: { type: 'string' },
parts: {
type: 'array',
items: {
type: 'object',
properties: {
id: { type: 'string' },
title: { type: 'string' },
instruction: { type: 'string' },
},
},
},
},
});
}
throw new Error(`Unexpected schema path: ${path}`);
});
@ -73,4 +93,25 @@ describe('schema-loader', () => {
expect(() => loadDecompositionSchema(0)).toThrow('maxParts must be a positive integer: 0');
expect(() => loadDecompositionSchema(-1)).toThrow('maxParts must be a positive integer: -1');
});
it('loadMorePartsSchema は maxItems を注入し、呼び出しごとに独立したオブジェクトを返す', async () => {
const { loadMorePartsSchema } = await import('../core/piece/schema-loader.js');
const first = loadMorePartsSchema(1);
const second = loadMorePartsSchema(4);
const firstParts = (first.properties as Record<string, unknown>).parts as Record<string, unknown>;
const secondParts = (second.properties as Record<string, unknown>).parts as Record<string, unknown>;
expect(firstParts.maxItems).toBe(1);
expect(secondParts.maxItems).toBe(4);
expect(readFileSyncMock).toHaveBeenCalledTimes(1);
});
it('loadMorePartsSchema は不正な maxAdditionalParts を拒否する', async () => {
const { loadMorePartsSchema } = await import('../core/piece/schema-loader.js');
expect(() => loadMorePartsSchema(0)).toThrow('maxAdditionalParts must be a positive integer: 0');
expect(() => loadMorePartsSchema(-1)).toThrow('maxAdditionalParts must be a positive integer: -1');
});
});
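The caching contract these tests pin down — read the base schema file once, then hand out an independent deep copy with `maxItems` injected per call — can be sketched as follows. Names (`loadBase`, `withMaxItems`) are illustrative, not the real `schema-loader` exports:

```typescript
type JsonSchema = Record<string, unknown>;

let cachedBase: JsonSchema | undefined;

// Read and parse the base schema at most once per process.
function loadBase(readFile: () => string): JsonSchema {
  if (!cachedBase) {
    cachedBase = JSON.parse(readFile()) as JsonSchema;
  }
  return cachedBase;
}

// Return an independent deep copy with maxItems injected, so callers
// can mutate their schema without corrupting the shared cache.
function withMaxItems(readFile: () => string, maxItems: number): JsonSchema {
  if (!Number.isInteger(maxItems) || maxItems <= 0) {
    throw new Error(`maxAdditionalParts must be a positive integer: ${maxItems}`);
  }
  const schema = JSON.parse(JSON.stringify(loadBase(readFile))) as JsonSchema;
  const properties = schema.properties as Record<string, JsonSchema>;
  properties.parts.maxItems = maxItems;
  return schema;
}
```

The deep copy is the important part: returning the cached object directly would let one caller's `maxItems` leak into the next, which is exactly what the "independent objects per call" assertion above guards against.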

View File

@ -0,0 +1,88 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { LineTimeSliceBuffer } from '../core/piece/engine/stream-buffer.js';
describe('LineTimeSliceBuffer', () => {
beforeEach(() => {
vi.useRealTimers();
});
it('改行までを返し、残りはバッファする', () => {
const flushed: Array<{ key: string; text: string }> = [];
const buffer = new LineTimeSliceBuffer({
flushIntervalMs: 100,
onTimedFlush: (key, text) => flushed.push({ key, text }),
});
const lines = buffer.push('a', 'hello\nworld');
expect(lines).toEqual(['hello']);
expect(flushed).toEqual([]);
});
it('time-slice経過で未改行バッファをflushする', async () => {
vi.useFakeTimers();
const flushed: Array<{ key: string; text: string }> = [];
const buffer = new LineTimeSliceBuffer({
flushIntervalMs: 50,
minTimedFlushChars: 1,
onTimedFlush: (key, text) => flushed.push({ key, text }),
});
buffer.push('a', 'partial');
expect(flushed).toHaveLength(0);
await vi.advanceTimersByTimeAsync(60);
expect(flushed).toEqual([{ key: 'a', text: 'partial' }]);
});
it('time-slice flush は境界(空白/句読点)までで切る', async () => {
vi.useFakeTimers();
const flushed: Array<{ key: string; text: string }> = [];
const buffer = new LineTimeSliceBuffer({
flushIntervalMs: 50,
minTimedFlushChars: 10,
onTimedFlush: (key, text) => flushed.push({ key, text }),
});
buffer.push('a', 'hello world from buffer');
await vi.advanceTimersByTimeAsync(60);
expect(flushed).toHaveLength(1);
expect(flushed[0]).toEqual({ key: 'a', text: 'hello world from ' });
expect(buffer.flushAll()).toEqual([{ key: 'a', text: 'buffer' }]);
});
it('境界がない文字列は maxTimedBufferMs 経過後に強制flushする', async () => {
vi.useFakeTimers();
const flushed: Array<{ key: string; text: string }> = [];
const buffer = new LineTimeSliceBuffer({
flushIntervalMs: 50,
minTimedFlushChars: 100,
maxTimedBufferMs: 120,
onTimedFlush: (key, text) => flushed.push({ key, text }),
});
buffer.push('a', '高松市保育需要');
await vi.advanceTimersByTimeAsync(60);
expect(flushed).toEqual([]);
await vi.advanceTimersByTimeAsync(120);
expect(flushed).toEqual([{ key: 'a', text: '高松市保育需要' }]);
});
it('flushAllでタイマーを止めて内容を回収する', () => {
const flushed: Array<{ key: string; text: string }> = [];
const buffer = new LineTimeSliceBuffer({
flushIntervalMs: 100,
onTimedFlush: (key, text) => flushed.push({ key, text }),
});
buffer.push('a', 'x');
buffer.push('b', 'y');
expect(buffer.flushAll()).toEqual([
{ key: 'a', text: 'x' },
{ key: 'b', text: 'y' },
]);
expect(flushed).toEqual([]);
});
});
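The core `push()` contract tested above — emit complete lines immediately, keep the trailing partial line buffered per key — can be sketched without the timer machinery. This is a reduced sketch (`LineBufferSketch` is a hypothetical name), omitting the timed-flush and boundary logic of the real `LineTimeSliceBuffer`:

```typescript
// Minimal per-key line buffer: push() returns complete lines and
// retains the unterminated tail; flushAll() drains every tail.
class LineBufferSketch {
  private pending = new Map<string, string>();

  push(key: string, text: string): string[] {
    const combined = (this.pending.get(key) ?? '') + text;
    const segments = combined.split('\n');
    // The last segment has no trailing newline yet — keep buffering it.
    this.pending.set(key, segments.pop() ?? '');
    return segments;
  }

  flushAll(): Array<{ key: string; text: string }> {
    const out = [...this.pending.entries()]
      .filter(([, text]) => text.length > 0)
      .map(([key, text]) => ({ key, text }));
    this.pending.clear();
    return out;
  }
}
```

The real buffer layers a flush timer on top of this so long-lived partial lines still reach the terminal, which is what the fake-timer tests above exercise.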

View File

@ -0,0 +1,49 @@
import { describe, it, expect } from 'vitest';
import { buildTeamLeaderAggregatedContent } from '../core/piece/engine/team-leader-aggregation.js';
import type { PartDefinition, PartResult } from '../core/models/types.js';
function makePart(id: string, title: string): PartDefinition {
return {
id,
title,
instruction: `do-${id}`,
timeoutMs: undefined,
};
}
describe('buildTeamLeaderAggregatedContent', () => {
it('decomposition とパート結果を規定フォーマットで連結する', () => {
const part1 = makePart('p1', 'API');
const part2 = makePart('p2', 'Test');
const partResults: PartResult[] = [
{
part: part1,
response: {
persona: 'execute.p1',
status: 'done',
content: 'API done',
timestamp: new Date(),
},
},
{
part: part2,
response: {
persona: 'execute.p2',
status: 'error',
content: '',
error: 'test failed',
timestamp: new Date(),
},
},
];
const content = buildTeamLeaderAggregatedContent([part1, part2], partResults);
expect(content).toContain('## decomposition');
expect(content).toContain('"id": "p1"');
expect(content).toContain('## p1: API');
expect(content).toContain('API done');
expect(content).toContain('## p2: Test');
expect(content).toContain('[ERROR] test failed');
});
});
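The aggregation format asserted above — a `## decomposition` header followed by one `## <id>: <title>` section per part, with failed parts rendered as `[ERROR] <detail>` — can be sketched like this. The shapes and the `aggregate` name are illustrative simplifications of the real `buildTeamLeaderAggregatedContent`:

```typescript
interface PartSketch { id: string; title: string }
interface ResultSketch {
  part: PartSketch;
  status: 'done' | 'error';
  content: string;
  error?: string;
}

// Join the decomposition plan and per-part sections with a '---'
// separator, mirroring the assertions in the test above.
function aggregate(parts: PartSketch[], results: ResultSketch[]): string {
  const blocks = [
    ['## decomposition', JSON.stringify(parts, null, 2)].join('\n'),
    ...results.map((r) => [
      `## ${r.part.id}: ${r.part.title}`,
      r.status === 'error' ? `[ERROR] ${r.error ?? '(no detail)'}` : r.content,
    ].join('\n')),
  ];
  return blocks.join('\n\n---\n\n');
}
```

Keeping failed parts in the aggregate (tagged `[ERROR]`) rather than dropping them lets the downstream rule evaluation see partial failures.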

View File

@ -0,0 +1,85 @@
import { describe, it, expect, vi } from 'vitest';
import { runTeamLeaderExecution } from '../core/piece/engine/team-leader-execution.js';
import type { PartDefinition, PartResult } from '../core/models/types.js';
function makePart(id: string): PartDefinition {
return {
id,
title: `title-${id}`,
instruction: `do-${id}`,
timeoutMs: undefined,
};
}
function makeResult(part: PartDefinition): PartResult {
return {
part,
response: {
persona: `execute.${part.id}`,
status: 'done',
content: `done ${part.id}`,
timestamp: new Date(),
},
};
}
describe('runTeamLeaderExecution', () => {
it('refill threshold 到達時に追加パートを取り込んで完了する', async () => {
const part1 = makePart('p1');
const part2 = makePart('p2');
const part3 = makePart('p3');
const requestMoreParts = vi.fn()
.mockResolvedValueOnce({
done: false,
reasoning: 'need one more',
parts: [{ id: 'p3', title: 'title-p3', instruction: 'do-p3', timeoutMs: undefined }],
})
.mockResolvedValueOnce({
done: true,
reasoning: 'enough',
parts: [],
});
const runPart = vi.fn(async (part: PartDefinition) => makeResult(part));
const result = await runTeamLeaderExecution({
initialParts: [part1, part2],
maxConcurrency: 2,
refillThreshold: 1,
runPart,
requestMoreParts,
});
expect(result.plannedParts.map((p) => p.id)).toEqual(['p1', 'p2', 'p3']);
expect(result.partResults.map((r) => r.part.id).sort()).toEqual(['p1', 'p2', 'p3']);
expect(runPart).toHaveBeenCalledTimes(3);
expect(requestMoreParts).toHaveBeenCalledTimes(2);
expect(result.partResults.some((r) => r.part.id === part3.id)).toBe(true);
});
it('重複IDだけ返された場合は追加せず終了する', async () => {
const part1 = makePart('p1');
const onPlanningNoNewParts = vi.fn();
const runPart = vi.fn(async (part: PartDefinition) => makeResult(part));
const requestMoreParts = vi.fn().mockResolvedValue({
done: false,
reasoning: 'duplicate only',
parts: [{ id: 'p1', title: 'dup', instruction: 'dup', timeoutMs: undefined }],
});
const result = await runTeamLeaderExecution({
initialParts: [part1],
maxConcurrency: 1,
refillThreshold: 0,
runPart,
requestMoreParts,
onPlanningNoNewParts,
});
expect(result.plannedParts.map((p) => p.id)).toEqual(['p1']);
expect(result.partResults).toHaveLength(1);
expect(onPlanningNoNewParts).toHaveBeenCalledTimes(1);
});
});
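The refill loop these tests describe — run queued parts, and whenever the queue drops to `refillThreshold` or below ask the planner for more, deduplicating by id — can be sketched sequentially. This is a sketch under simplifying assumptions: the real `runTeamLeaderExecution` runs a concurrent worker pool with callbacks and a total-part budget, while here concurrency is reduced to one for clarity and the names are illustrative:

```typescript
interface Part { id: string; instruction: string }
interface PlannerReply { done: boolean; parts: Part[] }

async function runWithRefill(
  initial: Part[],
  refillThreshold: number,
  runPart: (part: Part) => Promise<string>,
  requestMore: (doneIds: string[]) => Promise<PlannerReply>,
): Promise<Map<string, string>> {
  const queue = [...initial];
  const seen = new Set(initial.map((p) => p.id));
  const results = new Map<string, string>();
  let planning = true;
  while (queue.length > 0) {
    const part = queue.shift()!;
    results.set(part.id, await runPart(part));
    // After each completion, refill when the queue is at/below threshold.
    if (planning && queue.length <= refillThreshold) {
      const reply = await requestMore([...results.keys()]);
      const fresh = reply.parts.filter((p) => !seen.has(p.id));
      for (const p of fresh) {
        seen.add(p.id);
        queue.push(p);
      }
      // Stop planning when the leader says done or returns nothing new
      // (the duplicate-id-only case in the second test above).
      if (reply.done || fresh.length === 0) planning = false;
    }
  }
  return results;
}
```

Deduplicating against `seen` (all ids ever scheduled, not just completed ones) is what prevents a planner that re-suggests an existing id from looping forever.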

View File

@ -32,6 +32,20 @@ describe('team_leader schema', () => {
expect(result.success).toBe(false);
});
it('refill_threshold > max_parts は拒否する', () => {
const raw = {
name: 'implement',
team_leader: {
max_parts: 2,
refill_threshold: 3,
},
instruction_template: 'decompose',
};
const result = PieceMovementRawSchema.safeParse(raw);
expect(result.success).toBe(false);
});
it('parallel と team_leader の同時指定は拒否する', () => {
const raw = {
name: 'implement',
@ -94,6 +108,7 @@ describe('normalizePieceConfig team_leader', () => {
persona: 'team-leader',
personaPath: undefined,
maxParts: 2,
refillThreshold: 0,
timeoutMs: 90000,
partPersona: 'coder',
partPersonaPath: undefined,

View File

@ -27,6 +27,8 @@ export interface TeamLeaderConfig {
personaPath?: string;
/** Maximum number of parts to run in parallel */
maxParts: number;
/** Trigger additional planning when queued parts drop to this threshold or below */
refillThreshold: number;
/** Default timeout for parts in milliseconds */
timeoutMs: number;
/** Persona reference for part agents */

View File

@ -213,6 +213,8 @@ export const TeamLeaderConfigRawSchema = z.object({
persona: z.string().optional(),
/** Maximum number of parts (must be <= 3) */
max_parts: z.number().int().positive().max(3).optional().default(3),
/** Trigger additional planning when queue size is this value or below */
refill_threshold: z.number().int().min(0).optional().default(0),
/** Default timeout per part in milliseconds */
timeout_ms: z.number().int().positive().optional().default(600000),
/** Persona reference for part agents */
@ -223,7 +225,13 @@ export const TeamLeaderConfigRawSchema = z.object({
part_edit: z.boolean().optional(),
/** Permission mode for part agents */
part_permission_mode: PermissionModeSchema.optional(),
});
}).refine(
(data) => data.refill_threshold <= data.max_parts,
{
message: "'refill_threshold' must be less than or equal to 'max_parts'",
path: ['refill_threshold'],
},
);
/** Sub-movement schema for parallel execution */
export const ParallelSubMovementRawSchema = z.object({

View File

@ -2,7 +2,7 @@ import type { AgentResponse, PartDefinition, PieceRule, RuleMatchMethod, Languag
import { runAgent, type RunAgentOptions } from '../../agents/runner.js';
import { detectJudgeIndex, buildJudgePrompt } from '../../agents/judge-utils.js';
import { parseParts } from './engine/task-decomposer.js';
import { loadJudgmentSchema, loadEvaluationSchema, loadDecompositionSchema } from './schema-loader.js';
import { loadJudgmentSchema, loadEvaluationSchema, loadDecompositionSchema, loadMorePartsSchema } from './schema-loader.js';
import { detectRuleIndex } from '../../shared/utils/ruleIndex.js';
import { ensureUniquePartIds, parsePartDefinitionEntry } from './part-definition-validator.js';
@ -25,11 +25,18 @@ export interface EvaluateConditionOptions {
export interface DecomposeTaskOptions {
cwd: string;
persona?: string;
personaPath?: string;
language?: Language;
model?: string;
provider?: 'claude' | 'codex' | 'opencode' | 'mock';
}
export interface MorePartsResponse {
done: boolean;
reasoning: string;
parts: PartDefinition[];
}
function toPartDefinitions(raw: unknown, maxParts: number): PartDefinition[] {
if (!Array.isArray(raw)) {
throw new Error('Structured output "parts" must be an array');
@ -47,6 +54,118 @@ function toPartDefinitions(raw: unknown, maxParts: number): PartDefinition[] {
return parts;
}
function toMorePartsResponse(raw: unknown, maxAdditionalParts: number): MorePartsResponse {
if (typeof raw !== 'object' || raw == null || Array.isArray(raw)) {
throw new Error('Structured output must be an object');
}
const payload = raw as Record<string, unknown>;
if (typeof payload.done !== 'boolean') {
throw new Error('Structured output "done" must be a boolean');
}
if (typeof payload.reasoning !== 'string') {
throw new Error('Structured output "reasoning" must be a string');
}
if (!Array.isArray(payload.parts)) {
throw new Error('Structured output "parts" must be an array');
}
if (payload.parts.length > maxAdditionalParts) {
throw new Error(`Structured output produced too many parts: ${payload.parts.length} > ${maxAdditionalParts}`);
}
const parts: PartDefinition[] = payload.parts.map((entry, index) => parsePartDefinitionEntry(entry, index));
ensureUniquePartIds(parts);
return {
done: payload.done,
reasoning: payload.reasoning,
parts,
};
}
function summarizePartContent(content: string): string {
const maxLength = 2000;
if (content.length <= maxLength) {
return content;
}
return `${content.slice(0, maxLength)}\n...[truncated]`;
}
function buildDecomposePrompt(instruction: string, maxParts: number, language?: Language): string {
if (language === 'ja') {
return [
'以下はタスク分解専用の指示です。タスクを実行せず、分解だけを行ってください。',
'- ツールは使用しない',
`- パート数は 1 以上 ${maxParts} 以下`,
'- パートは互いに独立させる',
'',
'## 元タスク',
instruction,
].join('\n');
}
return [
'This is decomposition-only planning. Do not execute the task.',
'- Do not use any tool',
`- Produce between 1 and ${maxParts} independent parts`,
'- Keep each part self-contained',
'',
'## Original Task',
instruction,
].join('\n');
}
function buildMorePartsPrompt(
originalInstruction: string,
allResults: Array<{ id: string; title: string; status: string; content: string }>,
existingIds: string[],
maxAdditionalParts: number,
language?: Language,
): string {
const resultBlock = allResults.map((result) => [
`### ${result.id}: ${result.title} (${result.status})`,
summarizePartContent(result.content),
].join('\n')).join('\n\n');
if (language === 'ja') {
return [
'以下の実行結果を見て、追加のサブタスクが必要か判断してください。',
'- ツールは使用しない',
'',
'## 元タスク',
originalInstruction,
'',
'## 完了済みパート',
resultBlock || '(なし)',
'',
'## 判断ルール',
'- 追加作業が不要なら done=true にする',
'- 追加作業が必要なら parts に新しいパートを入れる',
'- 不足が複数ある場合は、可能な限り一括で複数パートを返す',
`- 既存IDは再利用しない: ${existingIds.join(', ') || '(なし)'}`,
`- 追加できる最大数: ${maxAdditionalParts}`,
].join('\n');
}
return [
'Review completed part results and decide whether additional parts are needed.',
'- Do not use any tool',
'',
'## Original Task',
originalInstruction,
'',
'## Completed Parts',
resultBlock || '(none)',
'',
'## Decision Rules',
'- Set done=true when no additional work is required',
'- If more work is needed, provide new parts in "parts"',
'- If multiple missing tasks are known, return multiple new parts in one batch when possible',
`- Do not reuse existing IDs: ${existingIds.join(', ') || '(none)'}`,
`- Maximum additional parts: ${maxAdditionalParts}`,
].join('\n');
}
export async function executeAgent(
persona: string | undefined,
instruction: string,
@ -164,13 +283,15 @@ export async function decomposeTask(
maxParts: number,
options: DecomposeTaskOptions,
): Promise<PartDefinition[]> {
const response = await runAgent(options.persona, instruction, {
const response = await runAgent(options.persona, buildDecomposePrompt(instruction, maxParts, options.language), {
cwd: options.cwd,
personaPath: options.personaPath,
language: options.language,
model: options.model,
provider: options.provider,
allowedTools: [],
permissionMode: 'readonly',
maxTurns: 3,
maxTurns: 2,
outputSchema: loadDecompositionSchema(maxParts),
});
@ -186,3 +307,38 @@ export async function decomposeTask(
return parseParts(response.content, maxParts);
}
export async function requestMoreParts(
originalInstruction: string,
allResults: Array<{ id: string; title: string; status: string; content: string }>,
existingIds: string[],
maxAdditionalParts: number,
options: DecomposeTaskOptions,
): Promise<MorePartsResponse> {
const prompt = buildMorePartsPrompt(
originalInstruction,
allResults,
existingIds,
maxAdditionalParts,
options.language,
);
const response = await runAgent(options.persona, prompt, {
cwd: options.cwd,
personaPath: options.personaPath,
language: options.language,
model: options.model,
provider: options.provider,
allowedTools: [],
permissionMode: 'readonly',
maxTurns: 2,
outputSchema: loadMorePartsSchema(maxAdditionalParts),
});
if (response.status !== 'done') {
const detail = response.error ?? response.content;
throw new Error(`Team leader feedback failed: ${detail}`);
}
return toMorePartsResponse(response.structuredOutput, maxAdditionalParts);
}

View File

@ -172,6 +172,76 @@ export class MovementExecutor {
}).build();
}
/**
* Apply shared post-execution phases (Phase 2/3 + fallback rule evaluation).
*
* This method is intentionally reusable by non-normal movement runners
* (e.g., team_leader) so rule/report behavior stays consistent.
*/
async applyPostExecutionPhases(
step: PieceMovement,
state: PieceState,
movementIteration: number,
response: AgentResponse,
updatePersonaSession: (persona: string, sessionId: string | undefined) => void,
): Promise<AgentResponse> {
let nextResponse = response;
const phaseCtx = this.deps.optionsBuilder.buildPhaseRunnerContext(
state,
nextResponse.content,
updatePersonaSession,
this.deps.onPhaseStart,
this.deps.onPhaseComplete,
);
// Phase 2: report output (resume same session, Write only)
if (step.outputContracts && step.outputContracts.length > 0) {
const reportResult = await runReportPhase(step, movementIteration, phaseCtx);
if (reportResult?.blocked) {
nextResponse = { ...nextResponse, status: 'blocked', content: reportResult.response.content };
return nextResponse;
}
}
// Phase 3: status judgment (new session, no tools, determines matched rule)
const phase3Result = needsStatusJudgmentPhase(step)
? await runStatusJudgmentPhase(step, phaseCtx)
: undefined;
if (phase3Result) {
log.debug('Rule matched (Phase 3)', {
movement: step.name,
ruleIndex: phase3Result.ruleIndex,
method: phase3Result.method,
});
nextResponse = {
...nextResponse,
matchedRuleIndex: phase3Result.ruleIndex,
matchedRuleMethod: phase3Result.method,
};
return nextResponse;
}
// No Phase 3 — use rule evaluator with Phase 1 content
const match = await detectMatchedRule(step, nextResponse.content, '', {
state,
cwd: this.deps.getCwd(),
interactive: this.deps.getInteractive(),
detectRuleIndex: this.deps.detectRuleIndex,
callAiJudge: this.deps.callAiJudge,
});
if (match) {
log.debug('Rule matched', { movement: step.name, ruleIndex: match.index, method: match.method });
nextResponse = {
...nextResponse,
matchedRuleIndex: match.index,
matchedRuleMethod: match.method,
};
}
return nextResponse;
}
/**
* Execute a normal (non-parallel) movement through all 3 phases.
*
@ -205,44 +275,13 @@ export class MovementExecutor {
let response = await executeAgent(step.persona, instruction, agentOptions);
updatePersonaSession(sessionKey, response.sessionId);
this.deps.onPhaseComplete?.(step, 1, 'execute', response.content, response.status, response.error);
response = await this.applyPostExecutionPhases(
step,
state,
movementIteration,
response,
updatePersonaSession,
);
const phaseCtx = this.deps.optionsBuilder.buildPhaseRunnerContext(state, response.content, updatePersonaSession, this.deps.onPhaseStart, this.deps.onPhaseComplete);
// Phase 2: report output (resume same session, Write only)
// When report phase returns blocked, propagate to PieceEngine's handleBlocked flow
if (step.outputContracts && step.outputContracts.length > 0) {
const reportResult = await runReportPhase(step, movementIteration, phaseCtx);
if (reportResult?.blocked) {
response = { ...response, status: 'blocked', content: reportResult.response.content };
state.movementOutputs.set(step.name, response);
state.lastOutput = response;
return { response, instruction };
}
}
// Phase 3: status judgment (new session, no tools, determines matched rule)
const phase3Result = needsStatusJudgmentPhase(step)
? await runStatusJudgmentPhase(step, phaseCtx)
: undefined;
if (phase3Result) {
// Phase 3 already determined the matched rule — use its result directly
log.debug('Rule matched (Phase 3)', { movement: step.name, ruleIndex: phase3Result.ruleIndex, method: phase3Result.method });
response = { ...response, matchedRuleIndex: phase3Result.ruleIndex, matchedRuleMethod: phase3Result.method };
} else {
// No Phase 3 — use rule evaluator with Phase 1 content
const match = await detectMatchedRule(step, response.content, '', {
state,
cwd: this.deps.getCwd(),
interactive: this.deps.getInteractive(),
detectRuleIndex: this.deps.detectRuleIndex,
callAiJudge: this.deps.callAiJudge,
});
if (match) {
log.debug('Rule matched', { movement: step.name, ruleIndex: match.index, method: match.method });
response = { ...response, matchedRuleIndex: match.index, matchedRuleMethod: match.method };
}
}
state.movementOutputs.set(step.name, response);
state.lastOutput = response;

View File

@ -5,27 +5,22 @@ import type {
PartDefinition,
PartResult,
} from '../../models/types.js';
import { decomposeTask, executeAgent } from '../agent-usecases.js';
import { decomposeTask, executeAgent, requestMoreParts } from '../agent-usecases.js';
import { detectMatchedRule } from '../evaluation/index.js';
import { buildSessionKey } from '../session-key.js';
import { ParallelLogger } from './parallel-logger.js';
import { incrementMovementIteration } from './state-manager.js';
import { buildAbortSignal } from './abort-signal.js';
import { createLogger, getErrorMessage } from '../../../shared/utils/index.js';
import { runTeamLeaderExecution } from './team-leader-execution.js';
import { buildTeamLeaderAggregatedContent } from './team-leader-aggregation.js';
import { createPartMovement, resolvePartErrorDetail, summarizeParts } from './team-leader-common.js';
import { buildTeamLeaderParallelLoggerOptions, emitTeamLeaderProgressHint } from './team-leader-streaming.js';
import type { OptionsBuilder } from './OptionsBuilder.js';
import type { MovementExecutor } from './MovementExecutor.js';
import type { PieceEngineOptions, PhaseName } from '../types.js';
import type { ParallelLoggerOptions } from './parallel-logger.js';
const log = createLogger('team-leader-runner');
const MAX_TOTAL_PARTS = 20;
function resolvePartErrorDetail(partResult: PartResult): string {
const detail = partResult.response.error ?? partResult.response.content;
if (!detail) {
throw new Error(`Part "${partResult.part.id}" failed without error detail`);
}
return detail;
}
export interface TeamLeaderRunnerDeps {
readonly optionsBuilder: OptionsBuilder;
@ -43,29 +38,6 @@ export interface TeamLeaderRunnerDeps {
readonly onPhaseComplete?: (step: PieceMovement, phase: 1 | 2 | 3, phaseName: PhaseName, content: string, status: string, error?: string) => void;
}
function createPartMovement(step: PieceMovement, part: PartDefinition): PieceMovement {
if (!step.teamLeader) {
throw new Error(`Movement "${step.name}" has no teamLeader configuration`);
}
return {
name: `${step.name}.${part.id}`,
description: part.title,
persona: step.teamLeader.partPersona ?? step.persona,
personaPath: step.teamLeader.partPersonaPath ?? step.personaPath,
personaDisplayName: `${step.name}:${part.id}`,
session: 'refresh',
allowedTools: step.teamLeader.partAllowedTools ?? step.allowedTools,
mcpServers: step.mcpServers,
provider: step.provider,
model: step.model,
requiredPermissionMode: step.teamLeader.partPermissionMode ?? step.requiredPermissionMode,
edit: step.teamLeader.partEdit ?? step.edit,
instructionTemplate: part.instruction,
passPreviousResponse: false,
};
}
export class TeamLeaderRunner {
constructor(
private readonly deps: TeamLeaderRunnerDeps,
@ -89,6 +61,8 @@ export class TeamLeaderRunner {
persona: teamLeaderConfig.persona ?? step.persona,
personaPath: teamLeaderConfig.personaPath ?? step.personaPath,
};
const leaderProvider = leaderStep.provider ?? this.deps.engineOptions.provider;
const leaderModel = leaderStep.model ?? this.deps.engineOptions.model;
const instruction = this.deps.movementExecutor.buildInstruction(
leaderStep,
movementIteration,
@ -97,12 +71,14 @@ export class TeamLeaderRunner {
maxMovements,
);
emitTeamLeaderProgressHint(this.deps.engineOptions, 'decompose');
this.deps.onPhaseStart?.(leaderStep, 1, 'execute', instruction);
const parts = await decomposeTask(instruction, teamLeaderConfig.maxParts, {
cwd: this.deps.getCwd(),
persona: leaderStep.persona,
model: leaderStep.model,
provider: leaderStep.provider,
personaPath: leaderStep.personaPath,
model: leaderModel,
provider: leaderProvider,
});
const leaderResponse: AgentResponse = {
persona: leaderStep.persona ?? leaderStep.name,
@ -116,49 +92,97 @@ export class TeamLeaderRunner {
partCount: parts.length,
partIds: parts.map((part) => part.id),
});
log.info('Team leader decomposition completed', {
movement: step.name,
partCount: parts.length,
parts: summarizeParts(parts),
});
const parallelLogger = this.deps.engineOptions.onStream
? new ParallelLogger(this.buildParallelLoggerOptions(
step.name,
movementIteration,
parts.map((part) => part.id),
state.iteration,
maxMovements,
))
? new ParallelLogger(buildTeamLeaderParallelLoggerOptions(
this.deps.engineOptions,
step.name,
movementIteration,
parts.map((part) => part.id),
state.iteration,
maxMovements,
))
: undefined;
const settled = await Promise.allSettled(
parts.map((part, index) => this.runSinglePart(
const { plannedParts, partResults } = await runTeamLeaderExecution({
initialParts: parts,
maxConcurrency: teamLeaderConfig.maxParts,
refillThreshold: teamLeaderConfig.refillThreshold,
maxTotalParts: MAX_TOTAL_PARTS,
onPartQueued: (part) => {
parallelLogger?.addSubMovement(part.id);
},
onPartCompleted: (result) => {
state.movementOutputs.set(result.response.persona, result.response);
},
onPlanningDone: ({ reason, plannedParts: plannedCount, completedParts }) => {
log.info('Team leader marked planning as done', {
movement: step.name,
plannedParts: plannedCount,
completedParts,
reasoning: reason,
});
},
onPlanningNoNewParts: ({ reason, plannedParts: plannedCount, completedParts }) => {
log.info('Team leader returned no new unique parts; stop planning', {
movement: step.name,
plannedParts: plannedCount,
completedParts,
reasoning: reason,
});
},
onPartsAdded: ({ parts: addedParts, reason, totalPlanned }) => {
log.info('Team leader added new parts', {
movement: step.name,
addedCount: addedParts.length,
totalPlannedAfterAdd: totalPlanned,
parts: summarizeParts(addedParts),
reasoning: reason,
});
},
onPlanningError: (error) => {
log.info('Team leader feedback failed; stop adding new parts', {
movement: step.name,
detail: getErrorMessage(error),
});
},
requestMoreParts: async ({ partResults: currentResults, scheduledIds, remainingPartBudget }) => {
emitTeamLeaderProgressHint(this.deps.engineOptions, 'feedback');
return requestMoreParts(
instruction,
currentResults.map((result) => ({
id: result.part.id,
title: result.part.title,
status: result.response.status,
content: result.response.status === 'error'
? `[ERROR] ${resolvePartErrorDetail(result)}`
: result.response.content,
})),
scheduledIds,
remainingPartBudget,
{
cwd: this.deps.getCwd(),
persona: leaderStep.persona,
personaPath: leaderStep.personaPath,
language: this.deps.engineOptions.language,
model: leaderModel,
provider: leaderProvider,
},
);
},
runPart: async (part, partIndex) => this.runSinglePart(
step,
part,
index,
partIndex,
teamLeaderConfig.timeoutMs,
updatePersonaSession,
parallelLogger,
)),
).catch((error) => this.buildErrorPartResult(step, part, error)),
);
const partResults: PartResult[] = settled.map((result, index) => {
const part = parts[index];
if (!part) {
throw new Error(`Missing part at index ${index}`);
}
if (result.status === 'fulfilled') {
state.movementOutputs.set(result.value.response.persona, result.value.response);
return result.value;
}
const errorMsg = getErrorMessage(result.reason);
const errorResponse: AgentResponse = {
persona: `${step.name}.${part.id}`,
status: 'error',
content: '',
timestamp: new Date(),
error: errorMsg,
};
state.movementOutputs.set(errorResponse.persona, errorResponse);
return { part, response: errorResponse };
});
const allFailed = partResults.every((result) => result.response.status === 'error');
@ -174,34 +198,23 @@ export class TeamLeaderRunner {
);
}
const aggregatedContent = buildTeamLeaderAggregatedContent(plannedParts, partResults);
let aggregatedResponse: AgentResponse = {
persona: step.name,
status: 'done',
content: aggregatedContent,
timestamp: new Date(),
};
aggregatedResponse = await this.deps.movementExecutor.applyPostExecutionPhases(
step,
state,
movementIteration,
aggregatedResponse,
updatePersonaSession,
);
state.movementOutputs.set(step.name, aggregatedResponse);
state.lastOutput = aggregatedResponse;
this.deps.movementExecutor.persistPreviousResponseSnapshot(
@ -246,29 +259,20 @@ export class TeamLeaderRunner {
}
}
private buildErrorPartResult(
step: PieceMovement,
part: PartDefinition,
error: unknown,
): PartResult {
const errorMsg = getErrorMessage(error);
const errorResponse: AgentResponse = {
persona: `${step.name}.${part.id}`,
status: 'error',
content: '',
timestamp: new Date(),
error: errorMsg,
};
return { part, response: errorResponse };
}
}

View File

@ -8,6 +8,7 @@
import type { StreamCallback, StreamEvent } from '../types.js';
import { stripAnsi } from '../../../shared/utils/text.js';
import { LineTimeSliceBuffer } from './stream-buffer.js';
/** ANSI color codes for sub-movement prefixes (cycled in order) */
const COLORS = ['\x1b[36m', '\x1b[33m', '\x1b[35m', '\x1b[32m'] as const; // cyan, yellow, magenta, green
@ -38,6 +39,12 @@ export interface ParallelLoggerOptions {
parentMovementName?: string;
/** Parent movement iteration count for rich parallel prefix display */
movementIteration?: number;
/** Flush interval for partial text buffers in milliseconds */
flushIntervalMs?: number;
/** Minimum buffered chars before timed flush is allowed */
minTimedFlushChars?: number;
/** Maximum wait time for timed flush even without boundary */
maxTimedBufferMs?: number;
}
/**
@ -49,33 +56,57 @@ export interface ParallelLoggerOptions {
* - Delegate init/result/error events to the parent callback
*/
export class ParallelLogger {
private static readonly DEFAULT_FLUSH_INTERVAL_MS = 300;
private maxNameLength: number;
private readonly subMovementNames: string[];
private readonly lineBuffer: LineTimeSliceBuffer;
private readonly parentOnStream?: StreamCallback;
private readonly writeFn: (text: string) => void;
private readonly progressInfo?: ParallelProgressInfo;
private totalSubMovements: number;
private readonly taskLabel?: string;
private readonly taskColorIndex?: number;
private readonly parentMovementName?: string;
private readonly movementIteration?: number;
private readonly flushIntervalMs: number;
constructor(options: ParallelLoggerOptions) {
this.subMovementNames = [...options.subMovementNames];
this.maxNameLength = Math.max(...this.subMovementNames.map((n) => n.length));
this.parentOnStream = options.parentOnStream;
this.writeFn = options.writeFn ?? ((text: string) => process.stdout.write(text));
this.progressInfo = options.progressInfo;
this.totalSubMovements = this.subMovementNames.length;
this.taskLabel = options.taskLabel ? options.taskLabel.slice(0, 4) : undefined;
this.taskColorIndex = options.taskColorIndex;
this.parentMovementName = options.parentMovementName;
this.movementIteration = options.movementIteration;
this.flushIntervalMs = options.flushIntervalMs ?? ParallelLogger.DEFAULT_FLUSH_INTERVAL_MS;
this.lineBuffer = new LineTimeSliceBuffer({
flushIntervalMs: this.flushIntervalMs,
onTimedFlush: (name, text) => this.flushPartialLine(name, text),
minTimedFlushChars: options.minTimedFlushChars,
maxTimedBufferMs: options.maxTimedBufferMs,
});
for (const name of this.subMovementNames) {
this.lineBuffer.addKey(name);
}
}
addSubMovement(name: string): number {
const existingIndex = this.subMovementNames.indexOf(name);
if (existingIndex >= 0) {
return existingIndex;
}
this.subMovementNames.push(name);
this.totalSubMovements = this.subMovementNames.length;
this.maxNameLength = Math.max(this.maxNameLength, name.length);
this.lineBuffer.addKey(name);
return this.subMovementNames.length - 1;
}
/**
* Build the colored prefix string for a sub-movement.
* Format: `\x1b[COLORm[name](iteration/max) step index/total\x1b[0m` + padding spaces
@ -139,13 +170,7 @@ export class ParallelLogger {
* Empty lines get no prefix per spec.
*/
private handleTextEvent(name: string, prefix: string, text: string): void {
const parts = this.lineBuffer.push(name, stripAnsi(text));
// Output all complete lines
for (const line of parts) {
@ -157,6 +182,12 @@ export class ParallelLogger {
}
}
private flushPartialLine(name: string, text: string): void {
const index = this.subMovementNames.indexOf(name);
const prefix = this.buildPrefix(name, index < 0 ? 0 : index);
this.writeFn(`${prefix}${text}\n`);
}
/**
* Handle block events (tool_use, tool_result, tool_output, thinking).
* Output with prefix, splitting multi-line content.
@ -207,17 +238,9 @@ export class ParallelLogger {
* Call after all sub-movements complete to output any trailing partial lines.
*/
flush(): void {
const pending = this.lineBuffer.flushAll();
for (const { key, text } of pending) {
this.flushPartialLine(key, text);
}
}

View File

@ -0,0 +1,144 @@
export interface LineTimeSliceBufferOptions {
flushIntervalMs: number;
onTimedFlush: (key: string, text: string) => void;
minTimedFlushChars?: number;
maxTimedBufferMs?: number;
}
export class LineTimeSliceBuffer {
private static readonly DEFAULT_MIN_TIMED_FLUSH_CHARS = 24;
private static readonly DEFAULT_MAX_TIMED_BUFFER_MS = 1500;
private readonly flushIntervalMs: number;
private readonly onTimedFlush: (key: string, text: string) => void;
private readonly minTimedFlushChars: number;
private readonly maxTimedBufferMs: number;
private readonly buffers: Map<string, string> = new Map();
private readonly timers: Map<string, NodeJS.Timeout> = new Map();
private readonly pendingSince: Map<string, number> = new Map();
constructor(options: LineTimeSliceBufferOptions) {
this.flushIntervalMs = options.flushIntervalMs;
this.onTimedFlush = options.onTimedFlush;
this.minTimedFlushChars = options.minTimedFlushChars ?? LineTimeSliceBuffer.DEFAULT_MIN_TIMED_FLUSH_CHARS;
this.maxTimedBufferMs = options.maxTimedBufferMs ?? LineTimeSliceBuffer.DEFAULT_MAX_TIMED_BUFFER_MS;
}
addKey(key: string): void {
if (!this.buffers.has(key)) {
this.buffers.set(key, '');
}
}
push(key: string, text: string): string[] {
this.addKey(key);
const buffer = this.buffers.get(key) ?? '';
const combined = buffer + text;
const parts = combined.split('\n');
const remainder = parts.pop() ?? '';
this.buffers.set(key, remainder);
if (remainder === '') {
this.pendingSince.delete(key);
this.clearTimer(key);
} else {
if (!this.pendingSince.has(key)) {
this.pendingSince.set(key, Date.now());
}
this.scheduleTimer(key);
}
return parts;
}
flushKey(key: string): string | undefined {
this.clearTimer(key);
this.pendingSince.delete(key);
const buffer = this.buffers.get(key) ?? '';
if (buffer === '') {
return undefined;
}
this.buffers.set(key, '');
return buffer;
}
flushAll(): Array<{ key: string; text: string }> {
const result: Array<{ key: string; text: string }> = [];
for (const key of this.buffers.keys()) {
const text = this.flushKey(key);
if (text !== undefined) {
result.push({ key, text });
}
}
return result;
}
private scheduleTimer(key: string): void {
this.clearTimer(key);
const timer = setTimeout(() => {
this.timers.delete(key);
const text = this.flushTimedKey(key);
if (text !== undefined) {
this.onTimedFlush(key, text);
}
}, this.flushIntervalMs);
this.timers.set(key, timer);
}
private flushTimedKey(key: string): string | undefined {
const buffer = this.buffers.get(key) ?? '';
if (buffer === '') {
this.pendingSince.delete(key);
return undefined;
}
const startedAt = this.pendingSince.get(key) ?? Date.now();
const elapsed = Date.now() - startedAt;
const canForceFlush = elapsed >= this.maxTimedBufferMs;
if (!canForceFlush && buffer.length < this.minTimedFlushChars) {
this.scheduleTimer(key);
return undefined;
}
const boundaryIndex = this.findBoundaryIndex(buffer);
const flushIndex = boundaryIndex > 0
? boundaryIndex
: buffer.length;
const flushText = buffer.slice(0, flushIndex);
const remainder = buffer.slice(flushIndex);
this.buffers.set(key, remainder);
if (remainder === '') {
this.pendingSince.delete(key);
} else {
this.scheduleTimer(key);
}
return flushText;
}
private findBoundaryIndex(text: string): number {
let lastIndex = -1;
for (let i = 0; i < text.length; i += 1) {
const ch = text.charAt(i);
if (this.isBoundary(ch)) {
lastIndex = i;
}
}
return lastIndex + 1;
}
private isBoundary(ch: string): boolean {
return /\s|[,.!?;:\[\]{}]/u.test(ch);
}
private clearTimer(key: string): void {
const timer = this.timers.get(key);
if (!timer) {
return;
}
clearTimeout(timer);
this.timers.delete(key);
}
}
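The timed-flush path above prefers to cut the pending buffer at the last whitespace or punctuation boundary so partial output breaks at readable fragments, falling back to flushing everything when no boundary exists. A standalone sketch of just that split (the function name is illustrative, not an export of this module):

```typescript
// Find the cut point after the last boundary character (whitespace or
// common punctuation), mirroring findBoundaryIndex/isBoundary above.
function boundaryCut(text: string): { flush: string; remainder: string } {
  let lastIndex = -1;
  for (let i = 0; i < text.length; i += 1) {
    if (/\s|[,.!?;:\[\]{}]/u.test(text.charAt(i))) {
      lastIndex = i;
    }
  }
  // No boundary found: flush the whole buffer rather than stalling forever.
  const cut = lastIndex + 1 > 0 ? lastIndex + 1 : text.length;
  return { flush: text.slice(0, cut), remainder: text.slice(cut) };
}
```

For example, `boundaryCut('hello world frag')` keeps the trailing word `frag` buffered for the next flush, while an unbroken token is emitted whole.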

View File

@ -0,0 +1,18 @@
import type { PartDefinition, PartResult } from '../../models/types.js';
import { resolvePartErrorDetail } from './team-leader-common.js';
export function buildTeamLeaderAggregatedContent(
plannedParts: PartDefinition[],
partResults: PartResult[],
): string {
return [
'## decomposition',
JSON.stringify({ parts: plannedParts }, null, 2),
...partResults.map((result) => [
`## ${result.part.id}: ${result.part.title}`,
result.response.status === 'error'
? `[ERROR] ${resolvePartErrorDetail(result)}`
: result.response.content,
].join('\n')),
].join('\n\n---\n\n');
}
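The aggregation above yields a `## decomposition` header, then one `##` section per part, joined by `---` separators. A simplified self-contained rendition of the same shape (error-detail handling and the repo's types are omitted here):

```typescript
interface SimplePart { id: string; title: string }
interface SimpleResult { part: SimplePart; content: string }

// Build the aggregated markdown: decomposition header, JSON part plan,
// then one section per completed part, separated by horizontal rules.
function aggregate(planned: SimplePart[], results: SimpleResult[]): string {
  return [
    '## decomposition',
    JSON.stringify({ parts: planned }, null, 2),
    ...results.map((r) => `## ${r.part.id}: ${r.part.title}\n${r.content}`),
  ].join('\n\n---\n\n');
}
```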

View File

@ -0,0 +1,40 @@
import type {
PartDefinition,
PartResult,
PieceMovement,
} from '../../models/types.js';
export function summarizeParts(parts: PartDefinition[]): Array<{ id: string; title: string }> {
return parts.map((part) => ({ id: part.id, title: part.title }));
}
export function resolvePartErrorDetail(partResult: PartResult): string {
const detail = partResult.response.error ?? partResult.response.content;
if (!detail) {
throw new Error(`Part "${partResult.part.id}" failed without error detail`);
}
return detail;
}
export function createPartMovement(step: PieceMovement, part: PartDefinition): PieceMovement {
if (!step.teamLeader) {
throw new Error(`Movement "${step.name}" has no teamLeader configuration`);
}
return {
name: `${step.name}.${part.id}`,
description: part.title,
persona: step.teamLeader.partPersona ?? step.persona,
personaPath: step.teamLeader.partPersonaPath ?? step.personaPath,
personaDisplayName: `${step.name}:${part.id}`,
session: 'refresh',
allowedTools: step.teamLeader.partAllowedTools ?? step.allowedTools,
mcpServers: step.mcpServers,
provider: step.provider,
model: step.model,
requiredPermissionMode: step.teamLeader.partPermissionMode ?? step.requiredPermissionMode,
edit: step.teamLeader.partEdit ?? step.edit,
instructionTemplate: part.instruction,
passPreviousResponse: false,
};
}

View File

@ -0,0 +1,143 @@
import type { MorePartsResponse } from '../agent-usecases.js';
import type { PartDefinition, PartResult } from '../../models/types.js';
const DEFAULT_MAX_TOTAL_PARTS = 20;
export interface TeamLeaderExecutionOptions {
initialParts: PartDefinition[];
maxConcurrency: number;
refillThreshold: number;
maxTotalParts?: number;
runPart: (part: PartDefinition, partIndex: number) => Promise<PartResult>;
requestMoreParts: (
args: {
partResults: PartResult[];
scheduledIds: string[];
remainingPartBudget: number;
}
) => Promise<MorePartsResponse>;
onPartQueued?: (part: PartDefinition, partIndex: number) => void;
onPartCompleted?: (result: PartResult) => void;
onPlanningDone?: (feedback: { reason: string; plannedParts: number; completedParts: number }) => void;
onPlanningNoNewParts?: (feedback: { reason: string; plannedParts: number; completedParts: number }) => void;
onPartsAdded?: (feedback: { parts: PartDefinition[]; reason: string; totalPlanned: number }) => void;
onPlanningError?: (error: unknown) => void;
}
interface RunningPart {
partId: string;
result: PartResult;
}
export interface TeamLeaderExecutionResult {
plannedParts: PartDefinition[];
partResults: PartResult[];
}
export async function runTeamLeaderExecution(
options: TeamLeaderExecutionOptions,
): Promise<TeamLeaderExecutionResult> {
const maxTotalParts = options.maxTotalParts ?? DEFAULT_MAX_TOTAL_PARTS;
const queue: PartDefinition[] = [...options.initialParts];
const plannedParts: PartDefinition[] = [...options.initialParts];
const partResults: PartResult[] = [];
const running = new Map<string, Promise<RunningPart>>();
const scheduledIds = new Set(options.initialParts.map((part) => part.id));
let nextPartIndex = 0;
let leaderDone = false;
const tryPlanMoreParts = async (): Promise<void> => {
if (leaderDone) {
return;
}
const remainingPartBudget = maxTotalParts - plannedParts.length;
if (remainingPartBudget <= 0) {
leaderDone = true;
return;
}
try {
const feedback = await options.requestMoreParts({
partResults,
scheduledIds: [...scheduledIds],
remainingPartBudget,
});
if (feedback.done) {
options.onPlanningDone?.({
reason: feedback.reasoning,
plannedParts: plannedParts.length,
completedParts: partResults.length,
});
leaderDone = true;
return;
}
const newParts: PartDefinition[] = [];
for (const newPart of feedback.parts) {
if (scheduledIds.has(newPart.id)) {
continue;
}
scheduledIds.add(newPart.id);
newParts.push(newPart);
}
if (newParts.length === 0) {
options.onPlanningNoNewParts?.({
reason: feedback.reasoning,
plannedParts: plannedParts.length,
completedParts: partResults.length,
});
leaderDone = true;
return;
}
plannedParts.push(...newParts);
queue.push(...newParts);
options.onPartsAdded?.({
parts: newParts,
reason: feedback.reasoning,
totalPlanned: plannedParts.length,
});
} catch (error) {
options.onPlanningError?.(error);
leaderDone = true;
}
};
while (queue.length > 0 || running.size > 0 || !leaderDone) {
while (queue.length > 0 && running.size < options.maxConcurrency) {
const part = queue.shift();
if (!part) {
break;
}
const partIndex = nextPartIndex;
nextPartIndex += 1;
options.onPartQueued?.(part, partIndex);
const runningPart = options.runPart(part, partIndex).then((result) => ({ partId: part.id, result }));
running.set(part.id, runningPart);
}
if (running.size > 0) {
const completed = await Promise.race(running.values());
running.delete(completed.partId);
partResults.push(completed.result);
options.onPartCompleted?.(completed.result);
if (queue.length <= options.refillThreshold) {
await tryPlanMoreParts();
}
continue;
}
if (leaderDone) {
break;
}
await tryPlanMoreParts();
}
return { plannedParts, partResults };
}
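The loop above implements a worker pool: keep up to `maxConcurrency` parts running, and when a completion leaves the queue at or below `refillThreshold`, ask the leader for more parts until it declares itself done or the budget runs out. A minimal self-contained simulation of that control flow (types and callback signatures are simplified stand-ins, not the module's real API):

```typescript
interface Part { id: string }

// Simplified refill-threshold worker pool: runPart executes one part,
// requestMore stands in for the team leader's feedback call.
async function runPool(
  initial: Part[],
  maxConcurrency: number,
  refillThreshold: number,
  maxTotalParts: number,
  runPart: (p: Part) => Promise<string>,
  requestMore: (completed: number, budget: number) => Promise<Part[]>,
): Promise<string[]> {
  const queue = [...initial];
  let planned = initial.length;
  const results: string[] = [];
  const running = new Map<string, Promise<{ id: string; out: string }>>();
  let leaderDone = false;

  const refill = async (): Promise<void> => {
    if (leaderDone) return;
    const budget = maxTotalParts - planned;
    if (budget <= 0) { leaderDone = true; return; }
    const extra = await requestMore(results.length, budget);
    if (extra.length === 0) { leaderDone = true; return; }
    planned += extra.length;
    queue.push(...extra);
  };

  while (queue.length > 0 || running.size > 0 || !leaderDone) {
    // Top up the pool.
    while (queue.length > 0 && running.size < maxConcurrency) {
      const part = queue.shift()!;
      running.set(part.id, runPart(part).then((out) => ({ id: part.id, out })));
    }
    if (running.size > 0) {
      const done = await Promise.race(running.values());
      running.delete(done.id);
      results.push(done.out);
      // Refill only when the queue has drained to the threshold.
      if (queue.length <= refillThreshold) await refill();
      continue;
    }
    if (leaderDone) break;
    await refill();
  }
  return results;
}
```

With two initial parts, a budget of four, and a leader that adds one part before finishing, the pool runs three parts in total.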

View File

@ -0,0 +1,47 @@
import { getLabel } from '../../../shared/i18n/index.js';
import type { PieceEngineOptions } from '../types.js';
import type { ParallelLoggerOptions } from './parallel-logger.js';
export function buildTeamLeaderParallelLoggerOptions(
engineOptions: PieceEngineOptions,
movementName: string,
movementIteration: number,
subMovementNames: string[],
iteration: number,
maxMovements: number,
): ParallelLoggerOptions {
const options: ParallelLoggerOptions = {
subMovementNames,
parentOnStream: engineOptions.onStream,
progressInfo: { iteration, maxMovements },
};
if (engineOptions.taskPrefix != null && engineOptions.taskColorIndex != null) {
return {
...options,
taskLabel: engineOptions.taskPrefix,
taskColorIndex: engineOptions.taskColorIndex,
parentMovementName: movementName,
movementIteration,
};
}
return options;
}
export function emitTeamLeaderProgressHint(
engineOptions: PieceEngineOptions,
kind: 'decompose' | 'feedback',
): void {
const onStream = engineOptions.onStream;
if (!onStream) {
return;
}
const key = kind === 'decompose'
? 'piece.teamLeader.decomposeWait'
: 'piece.teamLeader.feedbackWait';
const text = `${getLabel(key, engineOptions.language)}\n`;
onStream({ type: 'text', data: { text } });
}

View File

@ -17,16 +17,17 @@ export function parsePartDefinitionEntry(entry: unknown, index: number): PartDef
const title = assertNonEmptyString(raw.title, 'title', index);
const instruction = assertNonEmptyString(raw.instruction, 'instruction', index);
const timeoutMsValue = raw.timeout_ms;
if (timeoutMsValue != null && (typeof timeoutMsValue !== 'number' || !Number.isInteger(timeoutMsValue) || timeoutMsValue <= 0)) {
throw new Error(`Part[${index}] "timeout_ms" must be a positive integer`);
}
const timeoutMs = timeoutMsValue == null ? undefined : timeoutMsValue;
return {
id,
title,
instruction,
timeoutMs,
};
}

View File

@ -48,3 +48,22 @@ export function loadDecompositionSchema(maxParts: number): JsonSchema {
(rawParts as Record<string, unknown>).maxItems = maxParts;
return schema;
}
export function loadMorePartsSchema(maxAdditionalParts: number): JsonSchema {
if (!Number.isInteger(maxAdditionalParts) || maxAdditionalParts <= 0) {
throw new Error(`maxAdditionalParts must be a positive integer: ${maxAdditionalParts}`);
}
const schema = cloneSchema(loadSchema('more-parts.json'));
const properties = schema.properties;
if (!properties || typeof properties !== 'object' || Array.isArray(properties)) {
throw new Error('more-parts schema is invalid: properties is missing');
}
const rawParts = (properties as Record<string, unknown>).parts;
if (!rawParts || typeof rawParts !== 'object' || Array.isArray(rawParts)) {
throw new Error('more-parts schema is invalid: parts is missing');
}
(rawParts as Record<string, unknown>).maxItems = maxAdditionalParts;
return schema;
}
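Both schema helpers follow the same pattern: clone a base JSON schema, locate an array-typed property, and clamp its `maxItems` to the remaining part budget. A generic standalone sketch of that pattern (the inline schema and function name here are hypothetical examples, not the repo's `more-parts.json`):

```typescript
type JsonObject = Record<string, unknown>;

// Clone a schema and set maxItems on one array-typed property,
// leaving the original schema untouched.
function withMaxItems(schema: JsonObject, property: string, maxItems: number): JsonObject {
  if (!Number.isInteger(maxItems) || maxItems <= 0) {
    throw new Error(`maxItems must be a positive integer: ${maxItems}`);
  }
  const clone = JSON.parse(JSON.stringify(schema)) as JsonObject;
  const properties = clone.properties as JsonObject | undefined;
  const target = properties?.[property] as JsonObject | undefined;
  if (!target || typeof target !== 'object' || Array.isArray(target)) {
    throw new Error(`schema is invalid: "${property}" is missing`);
  }
  target.maxItems = maxItems;
  return clone;
}
```

Cloning before mutation matters because the loaded base schema may be cached and reused across calls with different budgets.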

View File

@ -261,6 +261,7 @@ function normalizeTeamLeader(
persona: personaSpec,
personaPath,
maxParts: raw.max_parts,
refillThreshold: raw.refill_threshold,
timeoutMs: raw.timeout_ms,
partPersona,
partPersonaPath,

View File

@ -65,7 +65,7 @@ export async function callMock(
content,
timestamp: new Date(),
sessionId,
structuredOutput: scenarioEntry?.structuredOutput ?? options.structuredOutput,
};
}

View File

@ -143,9 +143,15 @@ function validateEntry(entry: unknown, index: number): ScenarioEntry {
throw new Error(`Scenario entry [${index}] "persona" must be a string if provided`); throw new Error(`Scenario entry [${index}] "persona" must be a string if provided`);
} }
// structured_output is optional
if (obj.structured_output !== undefined && (typeof obj.structured_output !== 'object' || obj.structured_output === null || Array.isArray(obj.structured_output))) {
throw new Error(`Scenario entry [${index}] "structured_output" must be an object if provided`);
}
return { return {
persona: obj.persona as string | undefined, persona: obj.persona as string | undefined,
status: status as ScenarioEntry['status'], status: status as ScenarioEntry['status'],
content: obj.content as string, content: obj.content as string,
structuredOutput: obj.structured_output as Record<string, unknown> | undefined,
}; };
} }

View File

@ -25,4 +25,6 @@ export interface ScenarioEntry {
status: 'done' | 'blocked' | 'error' | 'approved' | 'rejected' | 'improve';
/** Response content body */
content: string;
/** Optional structured output payload (for outputSchema-driven flows) */
structuredOutput?: Record<string, unknown>;
}

View File

@ -59,6 +59,9 @@ interactive:
# ===== Piece Execution UI =====
piece:
teamLeader:
decomposeWait: "Team leader is decomposing tasks. This may take some time..."
feedbackWait: "Team leader is reviewing results and planning next tasks..."
iterationLimit:
maxReached: "Reached max iterations ({currentIteration}/{maxMovements})"
currentMovement: "Current movement: {currentMovement}"

View File

@ -59,6 +59,9 @@ interactive:
# ===== Piece Execution UI =====
piece:
teamLeader:
decomposeWait: "チームリーダーがタスクを分解中です。しばらくお待ちください..."
feedbackWait: "チームリーダーが結果を評価して次のタスクを計画中です..."
iterationLimit:
maxReached: "最大イテレーションに到達しました ({currentIteration}/{maxMovements})"
currentMovement: "現在のムーブメント: {currentMovement}"

View File

@ -25,19 +25,25 @@ The user is NOT asking YOU to do the work, but asking you to create task instruc
| "Review this code" | Create instructions for the piece to review |
| "Implement feature X" | Create instructions for the piece to implement |
| "Fix this bug" | Create instructions for the piece to fix |
| "I want to investigate X" / "I'd like to look into X" | Create instructions for the piece to investigate |
| "Investigate X for me" / "Look into X" | Direct request to you. Use tools to investigate |
Guideline: Distinguish whether the user is asking YOU to do the work, or asking you to create instructions for the PIECE. When ambiguous, default to creating instructions.
## Investigation Guidelines
### When YOU Should Investigate
Only when the user clearly directs you to investigate ("look into this for me", "check this", etc.).
Additionally, minimal checks are allowed without explicit request:
- Verifying file or directory existence (listing names only)
- Checking project directory structure
### When YOU Should NOT Investigate
Everything else. In particular, the following are prohibited unless clearly instructed:
- Reading file contents to understand them
- Implementation details (code internals, dependency analysis)
- Determining how to make changes
- Running tests or builds
@ -45,7 +51,8 @@ When agents can investigate on their own:
## Strict Requirements
- Only refine requirements. Actual work is done by piece agents
- Do NOT execute tasks yourself. Do NOT use the Task tool to launch pieces or agents
- Do NOT create, edit, or delete files
- Do NOT use Read/Glob/Grep/Bash proactively (unless the user explicitly asks)
- Do NOT mention slash commands
- Do NOT present task instructions during conversation (only when user requests)

View File

@ -25,19 +25,25 @@
| 「このコードをレビューして」 | ピースにレビューさせる指示書を作成 |
| 「機能Xを実装して」 | ピースに実装させる指示書を作成 |
| 「このバグを修正して」 | ピースに修正させる指示書を作成 |
| 「〜を調査したい」「〜を調べたい」 | ピースに調査させる指示書を作成 |
| 「〜を調査してほしい」「〜を調べて」 | あなたへの直接の依頼。ツールを使って調査してよい |
判断基準: ユーザーが「あなたに」作業を依頼しているか、「ピースに」やらせたいのかを区別する。曖昧な場合は指示書作成と解釈する。
## 調査の判断基準
### あなたが調査してよい場合
ユーザーがあなたに対して明確に調査を指示した場合のみ(「調べてほしい」「確認して」など)。
加えて、明示的な指示がなくても許可される最小限の確認:
- ファイルやディレクトリの存在確認(名前のリスト取得のみ)
- プロジェクトのディレクトリ構造の確認
### あなたが調査しない場合
上記以外のすべて。特に次の行為は明確な指示がない限り禁止:
- ファイルの中身を読み込んで内容を把握する行為
- 実装の詳細(コードの中身、依存関係の解析)
- 変更方法の特定(どう修正するか)
- テスト・ビルドの実行
@ -45,7 +51,8 @@
## 厳守事項
- 要求の明確化のみを行う。実際の作業はピースのエージェントが行う
- タスクを自分で実行しない。Task ツールでピースやエージェントを起動しない
- ファイルの作成/編集/削除はしない
- Read/Glob/Grep/Bash を勝手に使わない(ユーザーの明示的な依頼がある場合を除く)
- スラッシュコマンドに言及しない
- 指示書は対話中に勝手に提示しない(ユーザーが要求した場合のみ)

View File

@ -11,6 +11,9 @@ export default defineConfig({
'e2e/specs/github-issue.e2e.ts',
'e2e/specs/structured-output.e2e.ts',
'e2e/specs/opencode-conversation.e2e.ts',
'e2e/specs/team-leader.e2e.ts',
'e2e/specs/team-leader-worker-pool.e2e.ts',
'e2e/specs/team-leader-refill-threshold.e2e.ts',
],
},
});