Merge pull request #303 from nrslib/release/v0.19.0

Release v0.19.0
nrs 2026-02-18 22:55:20 +09:00 committed by GitHub
commit d69c20ab5d
GPG Key ID: B5690EEEBB952194
66 changed files with 4616 additions and 906 deletions


@@ -6,6 +6,34 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [0.19.0] - 2026-02-18
### Added
- Dedicated retry mode for failed tasks — conversation loop with failure context (error details, failed movement, last message), run session data, and piece structure injected into the system prompt
- Dedicated instruct system prompt for completed/failed task re-instruction — injects task name, content, branch changes, and retry notes directly into the prompt instead of using the generic interactive prompt
- Direct re-execution from `takt list` — "execute" action now runs the task immediately in the existing worktree instead of only requeuing to pending
- `startReExecution` atomic task transition — moves a completed/failed task directly to running status, avoiding the requeue → claim race condition
- Worktree reuse in task execution — reuses existing clone directory when it's still on disk, skipping branch name generation and clone creation
- Task history injection into interactive and summary system prompts — completed/failed/interrupted task summaries are included for context
- Previous run reference support in interactive and instruct system prompts — users can reference logs and reports from prior runs
- `findRunForTask` and `getRunPaths` helpers for automatic run session lookup by task content
- `isStaleRunningTask` process helper extracted from TaskLifecycleService for reuse
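The atomic transition above can be sketched as follows. This is a hedged illustration with hypothetical types, not the actual takt implementation:

```typescript
// Sketch of an atomic completed/failed -> running transition.
// Types and names here are illustrative only.
type Status = 'pending' | 'running' | 'completed' | 'failed';
interface Task { name: string; status: Status; }

// A requeue (failed -> pending) followed by a separate claim
// (pending -> running) is two writes that another worker can interleave.
// startReExecution collapses them into a single guarded transition.
function startReExecution(task: Task): Task {
  if (task.status !== 'completed' && task.status !== 'failed') {
    throw new Error(`cannot re-execute task in status ${task.status}`);
  }
  return { ...task, status: 'running' };
}

console.log(startReExecution({ name: 't', status: 'failed' }).status); // running
```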
### Changed
- Interactive module split: `interactive.ts` refactored into `interactive-summary.ts`, `runSelector.ts`, `runSessionReader.ts`, and `selectorUtils.ts` for better cohesion
- `requeueTask` now accepts generic `allowedStatuses` parameter instead of only accepting `failed` tasks
- Instruct/retry actions in `takt list` use the worktree path for conversation and run data lookup instead of the project root
- `save_task` action now requeues the task (saves for later execution), while `execute` action runs immediately
### Internal
- Removed `DebugConfig` from models, schemas, and global config — simplified to verbose mode only
- Added stdin simulation test helpers (`stdinSimulator.ts`) for E2E conversation loop testing
- Added comprehensive E2E tests for retry mode, interactive routes, and run session injection
- Added `check:release` npm script for pre-release validation
## [0.18.2] - 2026-02-18
### Added


@@ -6,6 +6,34 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [0.19.0] - 2026-02-18
### Added
- Dedicated retry mode for failed tasks — a conversation loop that injects the failure context (error details, failed movement, last message), run session data, and piece structure into the system prompt
- Dedicated instruct system prompt for re-instructing completed/failed tasks — injects the task name, content, branch changes, and retry notes directly into the prompt instead of using the generic interactive prompt
- Direct re-execution from `takt list` — the "execute" action now runs the task immediately in the existing worktree (not just requeuing it to pending)
- Atomic task status transition via `startReExecution` — moves a task directly from completed/failed to running, avoiding the requeue → claim race condition
- Worktree reuse during task execution — reuses the existing clone directory if it is still on disk (skipping branch name generation and clone creation)
- Task history injection into the interactive and summary system prompts — summaries of completed/failed/interrupted tasks are provided as context
- Previous-run reference support in the interactive and instruct system prompts — logs and reports can be referenced
- `findRunForTask` / `getRunPaths` helpers — automatic run session lookup by task content
- `isStaleRunningTask` process helper extracted from TaskLifecycleService for reuse
### Changed
- Interactive module split: `interactive.ts` refactored into `interactive-summary.ts`, `runSelector.ts`, `runSessionReader.ts`, and `selectorUtils.ts`
- `requeueTask` now accepts a generic `allowedStatuses` parameter (lifting the previous restriction to `failed` tasks only)
- The instruct/retry actions in `takt list` now use the worktree path instead of the project root to look up conversation and run data
- The `save_task` action requeues the task (saving it for later execution), while the `execute` action runs it immediately
### Internal
- Removed `DebugConfig` from models, schemas, and global config — simplified to verbose mode only
- Added stdin simulation test helpers (`stdinSimulator.ts`) to enable E2E conversation loop testing
- Added comprehensive E2E tests for retry mode, interactive routing, and run session injection
- Added the `check:release` npm script (for pre-release validation)
## [0.18.2] - 2026-02-18
### Added


@@ -76,10 +76,6 @@ takt --pipeline --task "Fix the bug" --auto-pr
## Usage
## Implementation Notes
- Retrying failed tasks and resuming sessions: [`docs/implements/retry-and-session.ja.md`](./implements/retry-and-session.ja.md)
### Interactive Mode
A mode for refining the task content in a conversation with the AI before executing it. Useful when the task requirements are vague or when you want to work out the details together with the AI.
@@ -94,6 +90,8 @@ takt hello
**Note:** When the `--task` option is given, interactive mode is skipped and the task is executed directly. Issue references (`#6` or `--issue`) are used as the initial input for interactive mode.
When the conversation starts, the `takt list` history is fetched automatically, and the `failed` / `interrupted` / `completed` run results are injected into `pieceContext` and reflected in the conversation summary. The summary exposes the `Worktree ID`, `start/end timestamps`, `final result`, `failure summary`, and `log reference key`. If the `takt list` lookup fails, the conversation continues anyway.
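The injected history entries follow roughly this shape. The field names below are taken from the project's tests; treat this as a sketch, not the canonical type:

```typescript
// One injected task-history entry, as exercised by the project's tests
// (the canonical type lives in the interactive module).
interface TaskHistorySummaryItem {
  worktreeId: string;
  status: 'completed' | 'failed' | 'interrupted';
  startedAt: string;      // ISO timestamp; empty values render as 'N/A'
  completedAt: string;
  finalResult: string;
  failureSummary?: string;
  logKey: string;         // branch name, or worktree path as a fallback
}

const example: TaskHistorySummaryItem = {
  worktreeId: '/tmp/task/failed',
  status: 'failed',
  startedAt: '2026-02-17T00:00:00.000Z',
  completedAt: '2026-02-17T00:10:00.000Z',
  finalResult: 'failed',
  failureSummary: 'syntax error',
  logKey: 'takt/failed',
};
console.log(example.logKey); // takt/failed
```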
**Flow:**
1. Piece selection
2. Conversation mode selection (assistant / persona / quiet / passthrough)
@@ -225,6 +223,8 @@ takt list --non-interactive --action delete --branch takt/my-branch --yes
takt list --non-interactive --format json
```
In interactive mode, the run history above (`failed` / `interrupted` / `completed`) is reused at startup, making it easier to identify failures and interrupted runs as candidates for rework.
#### Task directory workflow (create, run, verify)
1. Run `takt add` and confirm that a pending record is created in `.takt/tasks.yaml`.
@@ -449,7 +449,7 @@ movements:
| Kind | Syntax | Description |
|------|------|------|
| Tag-based | `"condition text"` | The agent outputs a `[STEP:N]` tag, matched by index |
| Tag-based | `"condition text"` | The agent outputs a `[MOVEMENTNAME:N]` tag, matched by index |
| AI-judged | `ai("condition text")` | The AI evaluates the condition against the agent output |
| Aggregate | `all("X")` / `any("X")` | Aggregates the results of parallel sub-movements |
@@ -941,6 +941,7 @@ export TAKT_OPENCODE_API_KEY=...
- [Faceted Prompting](./faceted-prompting.ja.md) - Separation of concerns in AI prompts (Persona, Policy, Instruction, Knowledge, Output Contract)
- [Piece Guide](./pieces.md) - Creating and customizing pieces
- [Agent Guide](./agents.md) - Configuring custom agents
- [Retry and Session](./implements/retry-and-session.ja.md) - Retrying failed tasks and resuming sessions
- [Changelog](../CHANGELOG.md) ([Japanese](./CHANGELOG.ja.md)) - Version history
- [Security Policy](../SECURITY.md) - Vulnerability reporting
- [Blog: TAKT - AI Agent Orchestration](https://zenn.dev/nrs/articles/c6842288a526d7) - Design philosophy and a practical usage guide

package-lock.json generated

@@ -1,12 +1,12 @@
{
"name": "takt",
"version": "0.18.2",
"version": "0.19.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "takt",
"version": "0.18.2",
"version": "0.19.0",
"license": "MIT",
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "^0.2.37",


@@ -1,6 +1,6 @@
{
"name": "takt",
"version": "0.18.2",
"version": "0.19.0",
"description": "TAKT: TAKT Agent Koordination Topology - AI Agent Piece Orchestration",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -25,6 +25,7 @@
"test:e2e:codex": "npm run test:e2e:provider:codex",
"test:e2e:opencode": "npm run test:e2e:provider:opencode",
"lint": "eslint src/",
"check:release": "npm run build && npm run lint && npm run test && npm run test:e2e",
"prepublishOnly": "npm run lint && npm run build && npm run test"
},
"keywords": [


@@ -114,6 +114,7 @@ describe('addTask', () => {
expect(task.task_dir).toBeTypeOf('string');
expect(readOrderContent(testDir, task.task_dir)).toContain('JWT認証を実装する');
expect(task.piece).toBe('default');
expect(task.worktree).toBe(true);
});
it('should include worktree settings when enabled', async () => {
@@ -125,6 +126,7 @@
const task = loadTasks(testDir).tasks[0]!;
expect(task.worktree).toBe('/custom/path');
expect(task.branch).toBe('feat/branch');
expect(task.auto_pr).toBe(true);
});
it('should create task from issue reference without interactive mode', async () => {


@@ -56,6 +56,22 @@ vi.mock('../features/interactive/index.js', () => ({
quietMode: vi.fn(),
personaMode: vi.fn(),
resolveLanguage: vi.fn(() => 'en'),
selectRun: vi.fn(() => null),
loadRunSessionContext: vi.fn(),
listRecentRuns: vi.fn(() => []),
normalizeTaskHistorySummary: vi.fn((items: unknown[]) => items),
dispatchConversationAction: vi.fn(async (result: { action: string }, handlers: Record<string, (r: unknown) => unknown>) => {
return handlers[result.action](result);
}),
}));
const mockListAllTaskItems = vi.fn();
const mockIsStaleRunningTask = vi.fn();
vi.mock('../infra/task/index.js', () => ({
TaskRunner: vi.fn(() => ({
listAllTaskItems: mockListAllTaskItems,
})),
isStaleRunningTask: (...args: unknown[]) => mockIsStaleRunningTask(...args),
}));
vi.mock('../infra/config/index.js', () => ({
@@ -110,6 +126,7 @@ const mockSelectRecentSession = vi.mocked(selectRecentSession);
const mockLoadGlobalConfig = vi.mocked(loadGlobalConfig);
const mockConfirm = vi.mocked(confirm);
const mockIsDirectTask = vi.mocked(isDirectTask);
const mockTaskRunnerListAllTaskItems = vi.mocked(mockListAllTaskItems);
function createMockIssue(number: number): GitHubIssue {
return {
@@ -133,6 +150,8 @@ beforeEach(() => {
mockConfirm.mockResolvedValue(true);
mockIsDirectTask.mockReturnValue(false);
mockParseIssueNumbers.mockReturnValue([]);
mockTaskRunnerListAllTaskItems.mockReturnValue([]);
mockIsStaleRunningTask.mockReturnValue(false);
});
describe('Issue resolution in routing', () => {
@@ -262,6 +281,142 @@ describe('Issue resolution in routing', () => {
});
});
describe('task history injection', () => {
it('should include failed/completed/interrupted tasks in pieceContext for interactive mode', async () => {
const failedTask = {
kind: 'failed' as const,
name: 'failed-task',
createdAt: '2026-02-17T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'failed',
worktreePath: '/tmp/task/failed',
branch: 'takt/failed',
startedAt: '2026-02-17T00:00:00.000Z',
completedAt: '2026-02-17T00:10:00.000Z',
failure: { error: 'syntax error' },
};
const completedTask = {
kind: 'completed' as const,
name: 'completed-task',
createdAt: '2026-02-16T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'done',
worktreePath: '/tmp/task/completed',
branch: 'takt/completed',
startedAt: '2026-02-16T00:00:00.000Z',
completedAt: '2026-02-16T00:07:00.000Z',
};
const runningTask = {
kind: 'running' as const,
name: 'running-task',
createdAt: '2026-02-15T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'running',
worktreePath: '/tmp/task/interrupted',
ownerPid: 555,
startedAt: '2026-02-15T00:00:00.000Z',
};
mockTaskRunnerListAllTaskItems.mockReturnValue([failedTask, completedTask, runningTask]);
mockIsStaleRunningTask.mockReturnValue(true);
// When
await executeDefaultAction('add feature');
// Then
expect(mockInteractiveMode).toHaveBeenCalledWith(
'/test/cwd',
'add feature',
expect.objectContaining({
taskHistory: expect.arrayContaining([
expect.objectContaining({
worktreeId: '/tmp/task/failed',
status: 'failed',
finalResult: 'failed',
logKey: 'takt/failed',
}),
expect.objectContaining({
worktreeId: '/tmp/task/completed',
status: 'completed',
finalResult: 'completed',
logKey: 'takt/completed',
}),
expect.objectContaining({
worktreeId: '/tmp/task/interrupted',
status: 'interrupted',
finalResult: 'interrupted',
logKey: '/tmp/task/interrupted',
}),
]),
}),
undefined,
);
});
it('should treat running tasks with no ownerPid as interrupted', async () => {
const runningTaskWithoutPid = {
kind: 'running' as const,
name: 'running-task-no-owner',
createdAt: '2026-02-15T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'running',
worktreePath: '/tmp/task/running-no-owner',
branch: 'takt/running-no-owner',
startedAt: '2026-02-15T00:00:00.000Z',
};
mockTaskRunnerListAllTaskItems.mockReturnValue([runningTaskWithoutPid]);
mockIsStaleRunningTask.mockReturnValue(true);
await executeDefaultAction('recover interrupted');
expect(mockIsStaleRunningTask).toHaveBeenCalledWith(undefined);
expect(mockInteractiveMode).toHaveBeenCalledWith(
'/test/cwd',
'recover interrupted',
expect.objectContaining({
taskHistory: expect.arrayContaining([
expect.objectContaining({
worktreeId: '/tmp/task/running-no-owner',
status: 'interrupted',
finalResult: 'interrupted',
logKey: 'takt/running-no-owner',
}),
]),
}),
undefined,
);
});
it('should continue interactive mode when task list retrieval fails', async () => {
mockTaskRunnerListAllTaskItems.mockImplementation(() => {
throw new Error('list failed');
});
// When
await executeDefaultAction('fix issue');
// Then
expect(mockInteractiveMode).toHaveBeenCalledWith(
'/test/cwd',
'fix issue',
expect.objectContaining({ taskHistory: [] }),
undefined,
);
});
it('should pass empty taskHistory when task list is empty', async () => {
mockTaskRunnerListAllTaskItems.mockReturnValue([]);
await executeDefaultAction('verify history');
expect(mockInteractiveMode).toHaveBeenCalledWith(
'/test/cwd',
'verify history',
expect.objectContaining({ taskHistory: [] }),
undefined,
);
});
});
describe('interactive mode cancel', () => {
it('should not call selectAndExecuteTask when interactive mode is cancelled', async () => {
// Given
@@ -387,4 +542,21 @@ describe('Issue resolution in routing', () => {
);
});
});
describe('run session reference', () => {
it('should not prompt run session reference in default interactive flow', async () => {
await executeDefaultAction();
expect(mockConfirm).not.toHaveBeenCalledWith(
"Reference a previous run's results?",
false,
);
expect(mockInteractiveMode).toHaveBeenCalledWith(
'/test/cwd',
undefined,
expect.anything(),
undefined,
);
});
});
});


@@ -62,7 +62,6 @@ vi.mock('../infra/config/index.js', () => ({
initGlobalDirs: vi.fn(),
initProjectDirs: vi.fn(),
loadGlobalConfig: vi.fn(() => ({ logLevel: 'info' })),
getEffectiveDebugConfig: vi.fn(),
}));
vi.mock('../infra/config/paths.js', () => ({


@@ -0,0 +1,176 @@
/**
* Stdin simulation helpers for testing interactive conversation loops.
*
* Simulates raw-mode TTY input by intercepting process.stdin events,
* feeding pre-defined input strings one-at-a-time as data events.
*/
import { vi } from 'vitest';
interface SavedStdinState {
isTTY: boolean | undefined;
isRaw: boolean | undefined;
setRawMode: typeof process.stdin.setRawMode | undefined;
stdoutWrite: typeof process.stdout.write;
stdinOn: typeof process.stdin.on;
stdinRemoveListener: typeof process.stdin.removeListener;
stdinResume: typeof process.stdin.resume;
stdinPause: typeof process.stdin.pause;
}
let saved: SavedStdinState | null = null;
/**
* Set up raw stdin simulation with pre-defined inputs.
*
* Each string in rawInputs is delivered as a Buffer via 'data' event
* when the conversation loop registers a listener.
*/
export function setupRawStdin(rawInputs: string[]): void {
saved = {
isTTY: process.stdin.isTTY,
isRaw: process.stdin.isRaw,
setRawMode: process.stdin.setRawMode,
stdoutWrite: process.stdout.write,
stdinOn: process.stdin.on,
stdinRemoveListener: process.stdin.removeListener,
stdinResume: process.stdin.resume,
stdinPause: process.stdin.pause,
};
Object.defineProperty(process.stdin, 'isTTY', { value: true, configurable: true });
Object.defineProperty(process.stdin, 'isRaw', { value: false, configurable: true, writable: true });
process.stdin.setRawMode = vi.fn((mode: boolean) => {
(process.stdin as unknown as { isRaw: boolean }).isRaw = mode;
return process.stdin;
}) as unknown as typeof process.stdin.setRawMode;
process.stdout.write = vi.fn(() => true) as unknown as typeof process.stdout.write;
process.stdin.resume = vi.fn(() => process.stdin) as unknown as typeof process.stdin.resume;
process.stdin.pause = vi.fn(() => process.stdin) as unknown as typeof process.stdin.pause;
let currentHandler: ((data: Buffer) => void) | null = null;
let inputIndex = 0;
process.stdin.on = vi.fn(((event: string, handler: (...args: unknown[]) => void) => {
if (event === 'data') {
currentHandler = handler as (data: Buffer) => void;
if (inputIndex < rawInputs.length) {
const data = rawInputs[inputIndex]!;
inputIndex++;
queueMicrotask(() => {
if (currentHandler) {
currentHandler(Buffer.from(data, 'utf-8'));
}
});
}
}
return process.stdin;
}) as typeof process.stdin.on);
process.stdin.removeListener = vi.fn(((event: string) => {
if (event === 'data') {
currentHandler = null;
}
return process.stdin;
}) as typeof process.stdin.removeListener);
}
/**
* Restore original stdin state after test.
*/
export function restoreStdin(): void {
if (!saved) return;
if (saved.isTTY !== undefined) {
Object.defineProperty(process.stdin, 'isTTY', { value: saved.isTTY, configurable: true });
}
if (saved.isRaw !== undefined) {
Object.defineProperty(process.stdin, 'isRaw', { value: saved.isRaw, configurable: true, writable: true });
}
if (saved.setRawMode) process.stdin.setRawMode = saved.setRawMode;
if (saved.stdoutWrite) process.stdout.write = saved.stdoutWrite;
if (saved.stdinOn) process.stdin.on = saved.stdinOn;
if (saved.stdinRemoveListener) process.stdin.removeListener = saved.stdinRemoveListener;
if (saved.stdinResume) process.stdin.resume = saved.stdinResume;
if (saved.stdinPause) process.stdin.pause = saved.stdinPause;
saved = null;
}
/**
* Convert human-readable inputs to raw stdin data.
*
* Strings get a carriage return appended; null becomes EOF (Ctrl+D).
*/
export function toRawInputs(inputs: (string | null)[]): string[] {
return inputs.map((input) => {
if (input === null) return '\x04';
return input + '\r';
});
}
export interface MockProviderCapture {
systemPrompts: string[];
callCount: number;
prompts: string[];
sessionIds: Array<string | undefined>;
}
/**
* Create a mock provider that captures system prompts and returns
* pre-defined responses. Returns a capture object for assertions.
*/
export function createMockProvider(responses: string[]): { provider: unknown; capture: MockProviderCapture } {
return createScenarioProvider(responses.map((content) => ({ content })));
}
/** A single AI call scenario with configurable status and error behavior. */
export interface CallScenario {
content: string;
status?: 'done' | 'blocked' | 'error';
sessionId?: string;
throws?: Error;
}
/**
* Create a mock provider with per-call scenario control.
*
* Each scenario controls what the AI returns for that call index.
* Captures system prompts, call arguments, and session IDs for assertions.
*/
export function createScenarioProvider(scenarios: CallScenario[]): { provider: unknown; capture: MockProviderCapture } {
const capture: MockProviderCapture = { systemPrompts: [], callCount: 0, prompts: [], sessionIds: [] };
const mockCall = vi.fn(async (prompt: string, options?: { sessionId?: string }) => {
const idx = capture.callCount;
capture.callCount++;
capture.prompts.push(prompt);
capture.sessionIds.push(options?.sessionId);
const scenario = idx < scenarios.length
? scenarios[idx]!
: { content: 'AI response' };
if (scenario.throws) {
throw scenario.throws;
}
return {
persona: 'test',
status: scenario.status ?? ('done' as const),
content: scenario.content,
sessionId: scenario.sessionId,
timestamp: new Date(),
};
});
const provider = {
setup: vi.fn(({ systemPrompt }: { systemPrompt: string }) => {
capture.systemPrompts.push(systemPrompt);
return { call: mockCall };
}),
_call: mockCall,
};
return { provider, capture };
}
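As a quick illustration of the input-encoding convention used above, here is a standalone copy of the `toRawInputs` mapping, so it runs without the test harness:

```typescript
// Strings become "typed text + Enter" (\r); null becomes Ctrl+D (\x04, EOF).
function toRawInputs(inputs: (string | null)[]): string[] {
  return inputs.map((input) => (input === null ? '\x04' : input + '\r'));
}

const raw = toRawInputs(['add more tests', '/go', null]);
console.log(raw[0] === 'add more tests\r'); // true
console.log(raw[2] === '\x04');             // true
```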


@@ -3,6 +3,7 @@
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { setupRawStdin, restoreStdin, toRawInputs, createMockProvider } from './helpers/stdinSimulator.js';
vi.mock('../infra/config/global/globalConfig.js', () => ({
loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
@@ -76,118 +77,16 @@ import { getProvider } from '../infra/providers/index.js';
import { runInstructMode } from '../features/tasks/list/instructMode.js';
import { selectOption } from '../shared/prompt/index.js';
import { info } from '../shared/ui/index.js';
import { loadTemplate } from '../shared/prompts/index.js';
const mockGetProvider = vi.mocked(getProvider);
const mockSelectOption = vi.mocked(selectOption);
const mockInfo = vi.mocked(info);
let savedIsTTY: boolean | undefined;
let savedIsRaw: boolean | undefined;
let savedSetRawMode: typeof process.stdin.setRawMode | undefined;
let savedStdoutWrite: typeof process.stdout.write;
let savedStdinOn: typeof process.stdin.on;
let savedStdinRemoveListener: typeof process.stdin.removeListener;
let savedStdinResume: typeof process.stdin.resume;
let savedStdinPause: typeof process.stdin.pause;
function setupRawStdin(rawInputs: string[]): void {
savedIsTTY = process.stdin.isTTY;
savedIsRaw = process.stdin.isRaw;
savedSetRawMode = process.stdin.setRawMode;
savedStdoutWrite = process.stdout.write;
savedStdinOn = process.stdin.on;
savedStdinRemoveListener = process.stdin.removeListener;
savedStdinResume = process.stdin.resume;
savedStdinPause = process.stdin.pause;
Object.defineProperty(process.stdin, 'isTTY', { value: true, configurable: true });
Object.defineProperty(process.stdin, 'isRaw', { value: false, configurable: true, writable: true });
process.stdin.setRawMode = vi.fn((mode: boolean) => {
(process.stdin as unknown as { isRaw: boolean }).isRaw = mode;
return process.stdin;
}) as unknown as typeof process.stdin.setRawMode;
process.stdout.write = vi.fn(() => true) as unknown as typeof process.stdout.write;
process.stdin.resume = vi.fn(() => process.stdin) as unknown as typeof process.stdin.resume;
process.stdin.pause = vi.fn(() => process.stdin) as unknown as typeof process.stdin.pause;
let currentHandler: ((data: Buffer) => void) | null = null;
let inputIndex = 0;
process.stdin.on = vi.fn(((event: string, handler: (...args: unknown[]) => void) => {
if (event === 'data') {
currentHandler = handler as (data: Buffer) => void;
if (inputIndex < rawInputs.length) {
const data = rawInputs[inputIndex]!;
inputIndex++;
queueMicrotask(() => {
if (currentHandler) {
currentHandler(Buffer.from(data, 'utf-8'));
}
});
}
}
return process.stdin;
}) as typeof process.stdin.on);
process.stdin.removeListener = vi.fn(((event: string) => {
if (event === 'data') {
currentHandler = null;
}
return process.stdin;
}) as typeof process.stdin.removeListener);
}
function restoreStdin(): void {
if (savedIsTTY !== undefined) {
Object.defineProperty(process.stdin, 'isTTY', { value: savedIsTTY, configurable: true });
}
if (savedIsRaw !== undefined) {
Object.defineProperty(process.stdin, 'isRaw', { value: savedIsRaw, configurable: true, writable: true });
}
if (savedSetRawMode) {
process.stdin.setRawMode = savedSetRawMode;
}
if (savedStdoutWrite) {
process.stdout.write = savedStdoutWrite;
}
if (savedStdinOn) {
process.stdin.on = savedStdinOn;
}
if (savedStdinRemoveListener) {
process.stdin.removeListener = savedStdinRemoveListener;
}
if (savedStdinResume) {
process.stdin.resume = savedStdinResume;
}
if (savedStdinPause) {
process.stdin.pause = savedStdinPause;
}
}
function toRawInputs(inputs: (string | null)[]): string[] {
return inputs.map((input) => {
if (input === null) return '\x04';
return input + '\r';
});
}
const mockLoadTemplate = vi.mocked(loadTemplate);
function setupMockProvider(responses: string[]): void {
let callIndex = 0;
const mockCall = vi.fn(async () => {
const content = callIndex < responses.length ? responses[callIndex] : 'AI response';
callIndex++;
return {
persona: 'instruct',
status: 'done' as const,
content: content!,
timestamp: new Date(),
};
});
const mockProvider = {
setup: () => ({ call: mockCall }),
_call: mockCall,
};
mockGetProvider.mockReturnValue(mockProvider);
const { provider } = createMockProvider(responses);
mockGetProvider.mockReturnValue(provider);
}
beforeEach(() => {
@@ -204,29 +103,17 @@ describe('runInstructMode', () => {
setupRawStdin(toRawInputs(['/cancel']));
setupMockProvider([]);
const result = await runInstructMode('/project', 'branch context', 'feature-branch');
const result = await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
expect(result.action).toBe('cancel');
expect(result.task).toBe('');
});
it('should include branch name in intro message', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupMockProvider([]);
await runInstructMode('/project', 'diff stats', 'my-feature-branch');
const introCall = mockInfo.mock.calls.find((call) =>
call[0]?.includes('my-feature-branch')
);
expect(introCall).toBeDefined();
});
it('should return action=execute with task on /go after conversation', async () => {
setupRawStdin(toRawInputs(['add more tests', '/go']));
setupMockProvider(['What kind of tests?', 'Add unit tests for the feature.']);
const result = await runInstructMode('/project', 'branch context', 'feature-branch');
const result = await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
expect(result.action).toBe('execute');
expect(result.task).toBe('Add unit tests for the feature.');
@@ -237,7 +124,7 @@ describe('runInstructMode', () => {
setupMockProvider(['response', 'Summarized task.']);
mockSelectOption.mockResolvedValue('save_task');
const result = await runInstructMode('/project', 'branch context', 'feature-branch');
const result = await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
expect(result.action).toBe('save_task');
expect(result.task).toBe('Summarized task.');
@@ -248,7 +135,7 @@ describe('runInstructMode', () => {
setupMockProvider(['response', 'Summarized task.']);
mockSelectOption.mockResolvedValueOnce('continue');
const result = await runInstructMode('/project', 'branch context', 'feature-branch');
const result = await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
expect(result.action).toBe('cancel');
});
@@ -257,7 +144,7 @@ describe('runInstructMode', () => {
setupRawStdin(toRawInputs(['/go', '/cancel']));
setupMockProvider([]);
const result = await runInstructMode('/project', 'branch context', 'feature-branch');
const result = await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
expect(result.action).toBe('cancel');
});
@@ -266,7 +153,7 @@ describe('runInstructMode', () => {
setupRawStdin(toRawInputs(['task', '/go']));
setupMockProvider(['response', 'Task summary.']);
await runInstructMode('/project', 'branch context', 'feature-branch');
await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '');
const selectCall = mockSelectOption.mock.calls.find((call) =>
Array.isArray(call[1])
@@ -279,4 +166,53 @@ describe('runInstructMode', () => {
expect(values).toContain('continue');
expect(values).not.toContain('create_issue');
});
it('should use dedicated instruct system prompt with task context', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupMockProvider([]);
await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', 'existing note');
expect(mockLoadTemplate).toHaveBeenCalledWith(
'score_instruct_system_prompt',
'en',
expect.objectContaining({
taskName: 'my-task',
taskContent: 'Do something',
branchName: 'feature-branch',
branchContext: 'branch context',
retryNote: 'existing note',
}),
);
});
it('should inject selected run context into system prompt variables', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupMockProvider([]);
const runSessionContext = {
task: 'Previous run task',
piece: 'default',
status: 'completed',
movementLogs: [
{ step: 'implement', persona: 'coder', status: 'completed', content: 'done' },
],
reports: [
{ filename: '00-plan.md', content: '# Plan' },
],
};
await runInstructMode('/project', 'branch context', 'feature-branch', 'my-task', 'Do something', '', undefined, runSessionContext);
expect(mockLoadTemplate).toHaveBeenCalledWith(
'score_instruct_system_prompt',
'en',
expect.objectContaining({
hasRunSession: true,
runTask: 'Previous run task',
runPiece: 'default',
runStatus: 'completed',
}),
);
});
});


@@ -0,0 +1,102 @@
/**
* Tests for task history context formatting in interactive summary.
*/
import { describe, expect, it } from 'vitest';
import {
buildSummaryPrompt,
formatTaskHistorySummary,
type PieceContext,
type TaskHistorySummaryItem,
} from '../features/interactive/interactive.js';
describe('formatTaskHistorySummary', () => {
it('returns empty string when history is empty', () => {
expect(formatTaskHistorySummary([], 'en')).toBe('');
});
it('formats task history with required fields', () => {
const history: TaskHistorySummaryItem[] = [
{
worktreeId: 'wt-1',
status: 'interrupted',
startedAt: '2026-02-18T00:00:00.000Z',
completedAt: 'N/A',
finalResult: 'interrupted',
failureSummary: undefined,
logKey: 'log-1',
},
{
worktreeId: 'wt-2',
status: 'failed',
startedAt: '2026-02-17T00:00:00.000Z',
completedAt: '2026-02-17T00:01:00.000Z',
finalResult: 'failed',
failureSummary: 'Syntax error in test',
logKey: 'log-2',
},
];
const result = formatTaskHistorySummary(history, 'en');
expect(result).toContain('## Task execution history');
expect(result).toContain('Worktree ID: wt-1');
expect(result).toContain('Status: interrupted');
expect(result).toContain('Failure summary: Syntax error in test');
expect(result).toContain('Log key: log-2');
});
it('normalizes empty start/end timestamps to N/A', () => {
const history: TaskHistorySummaryItem[] = [
{
worktreeId: 'wt-3',
status: 'interrupted',
startedAt: '',
completedAt: '',
finalResult: 'interrupted',
failureSummary: undefined,
logKey: 'log-3',
},
];
const result = formatTaskHistorySummary(history, 'en');
expect(result).toContain('Start/End: N/A / N/A');
});
});
describe('buildSummaryPrompt', () => {
it('includes taskHistory context when provided', () => {
const history: TaskHistorySummaryItem[] = [
{
worktreeId: 'wt-1',
status: 'completed',
startedAt: '2026-02-10T00:00:00.000Z',
completedAt: '2026-02-10T00:00:30.000Z',
finalResult: 'completed',
failureSummary: undefined,
logKey: 'log-1',
},
];
const pieceContext: PieceContext = {
name: 'my-piece',
description: 'desc',
pieceStructure: '',
movementPreviews: [],
taskHistory: history,
};
const summary = buildSummaryPrompt(
[{ role: 'user', content: 'Improve parser' }],
false,
'en',
'No transcript',
'Conversation:',
pieceContext,
);
expect(summary).toContain('## Task execution history');
expect(summary).toContain('Worktree ID: wt-1');
expect(summary).toContain('Conversation:');
expect(summary).toContain('User: Improve parser');
});
});


@@ -3,6 +3,7 @@
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { setupRawStdin, restoreStdin, toRawInputs, createMockProvider } from './helpers/stdinSimulator.js';
vi.mock('../infra/config/global/globalConfig.js', () => ({
loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
@@ -56,132 +57,9 @@ import { selectOption } from '../shared/prompt/index.js';
const mockGetProvider = vi.mocked(getProvider);
const mockSelectOption = vi.mocked(selectOption);
// Store original stdin/stdout properties to restore
let savedIsTTY: boolean | undefined;
let savedIsRaw: boolean | undefined;
let savedSetRawMode: typeof process.stdin.setRawMode | undefined;
let savedStdoutWrite: typeof process.stdout.write;
let savedStdinOn: typeof process.stdin.on;
let savedStdinRemoveListener: typeof process.stdin.removeListener;
let savedStdinResume: typeof process.stdin.resume;
let savedStdinPause: typeof process.stdin.pause;
/**
* Captures the current data handler and provides sendData.
*
* When readMultilineInput registers process.stdin.on('data', handler),
* this captures the handler so tests can send raw input data.
*
* rawInputs: array of raw strings to send sequentially. Each time a new
* 'data' listener is registered, the next raw input is sent via queueMicrotask.
*/
function setupRawStdin(rawInputs: string[]): void {
savedIsTTY = process.stdin.isTTY;
savedIsRaw = process.stdin.isRaw;
savedSetRawMode = process.stdin.setRawMode;
savedStdoutWrite = process.stdout.write;
savedStdinOn = process.stdin.on;
savedStdinRemoveListener = process.stdin.removeListener;
savedStdinResume = process.stdin.resume;
savedStdinPause = process.stdin.pause;
Object.defineProperty(process.stdin, 'isTTY', { value: true, configurable: true });
Object.defineProperty(process.stdin, 'isRaw', { value: false, configurable: true, writable: true });
process.stdin.setRawMode = vi.fn((mode: boolean) => {
(process.stdin as unknown as { isRaw: boolean }).isRaw = mode;
return process.stdin;
}) as unknown as typeof process.stdin.setRawMode;
process.stdout.write = vi.fn(() => true) as unknown as typeof process.stdout.write;
process.stdin.resume = vi.fn(() => process.stdin) as unknown as typeof process.stdin.resume;
process.stdin.pause = vi.fn(() => process.stdin) as unknown as typeof process.stdin.pause;
let currentHandler: ((data: Buffer) => void) | null = null;
let inputIndex = 0;
process.stdin.on = vi.fn(((event: string, handler: (...args: unknown[]) => void) => {
if (event === 'data') {
currentHandler = handler as (data: Buffer) => void;
// Send next input when handler is registered
if (inputIndex < rawInputs.length) {
const data = rawInputs[inputIndex]!;
inputIndex++;
queueMicrotask(() => {
if (currentHandler) {
currentHandler(Buffer.from(data, 'utf-8'));
}
});
}
}
return process.stdin;
}) as typeof process.stdin.on);
process.stdin.removeListener = vi.fn(((event: string) => {
if (event === 'data') {
currentHandler = null;
}
return process.stdin;
}) as typeof process.stdin.removeListener);
}
function restoreStdin(): void {
if (savedIsTTY !== undefined) {
Object.defineProperty(process.stdin, 'isTTY', { value: savedIsTTY, configurable: true });
}
if (savedIsRaw !== undefined) {
Object.defineProperty(process.stdin, 'isRaw', { value: savedIsRaw, configurable: true, writable: true });
}
if (savedSetRawMode) {
process.stdin.setRawMode = savedSetRawMode;
}
if (savedStdoutWrite) {
process.stdout.write = savedStdoutWrite;
}
if (savedStdinOn) {
process.stdin.on = savedStdinOn;
}
if (savedStdinRemoveListener) {
process.stdin.removeListener = savedStdinRemoveListener;
}
if (savedStdinResume) {
process.stdin.resume = savedStdinResume;
}
if (savedStdinPause) {
process.stdin.pause = savedStdinPause;
}
}
/**
* Convert user-level inputs to raw stdin data.
*
* Each element is either:
* - A string: sent as typed characters + Enter (\r)
* - null: sent as Ctrl+D (\x04)
*/
function toRawInputs(inputs: (string | null)[]): string[] {
return inputs.map((input) => {
if (input === null) return '\x04';
return input + '\r';
});
}
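For reference, the string-to-raw mapping above can be exercised standalone. This is a minimal sketch: `toRawInputsSketch` is a hypothetical copy for illustration, while the real `toRawInputs` lives in the shared test helpers.

```typescript
// Standalone sketch of the toRawInputs mapping: a string becomes typed
// characters followed by Enter (\r); null becomes Ctrl+D (\x04, i.e. EOF).
function toRawInputsSketch(inputs: (string | null)[]): string[] {
  return inputs.map((input) => (input === null ? '\x04' : input + '\r'));
}

// A one-message conversation ended by Ctrl+D:
const raw = toRawInputsSketch(['hello', null]);
// raw is ['hello\r', '\x04']
```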
/** Create a mock provider that returns given responses */
function setupMockProvider(responses: string[]): void {
const { provider } = createMockProvider(responses);
mockGetProvider.mockReturnValue(provider);
}
beforeEach(() => {
@ -387,6 +265,63 @@ describe('interactiveMode', () => {
);
});
it('should include run session context in system prompt when provided', async () => {
// Given
setupRawStdin(toRawInputs(['hello', '/cancel']));
const mockSetup = vi.fn();
const mockCall = vi.fn(async () => ({
persona: 'interactive',
status: 'done' as const,
content: 'AI response',
timestamp: new Date(),
}));
mockSetup.mockReturnValue({ call: mockCall });
mockGetProvider.mockReturnValue({ setup: mockSetup, _call: mockCall } as unknown as ReturnType<typeof getProvider>);
const runSessionContext = {
task: 'Previous run task',
piece: 'default',
status: 'completed',
movementLogs: [{ step: 'implement', persona: 'coder', status: 'completed', content: 'Implementation done' }],
reports: [],
};
// When
await interactiveMode('/project', undefined, undefined, undefined, runSessionContext);
// Then: system prompt should contain run session content
expect(mockSetup).toHaveBeenCalled();
const setupArgs = mockSetup.mock.calls[0]![0] as { systemPrompt: string };
expect(setupArgs.systemPrompt).toContain('Previous run task');
expect(setupArgs.systemPrompt).toContain('default');
expect(setupArgs.systemPrompt).toContain('completed');
expect(setupArgs.systemPrompt).toContain('implement');
expect(setupArgs.systemPrompt).toContain('Implementation done');
expect(setupArgs.systemPrompt).toContain('Previous Run Reference');
});
it('should not include run session section in system prompt when not provided', async () => {
// Given
setupRawStdin(toRawInputs(['hello', '/cancel']));
const mockSetup = vi.fn();
const mockCall = vi.fn(async () => ({
persona: 'interactive',
status: 'done' as const,
content: 'AI response',
timestamp: new Date(),
}));
mockSetup.mockReturnValue({ call: mockCall });
mockGetProvider.mockReturnValue({ setup: mockSetup, _call: mockCall } as unknown as ReturnType<typeof getProvider>);
// When
await interactiveMode('/project');
// Then: system prompt should NOT contain run session section
expect(mockSetup).toHaveBeenCalled();
const setupArgs = mockSetup.mock.calls[0]![0] as { systemPrompt: string };
expect(setupArgs.systemPrompt).not.toContain('Previous Run Reference');
});
it('should abort in-flight provider call on SIGINT during initial input', async () => {
mockGetProvider.mockReturnValue({
setup: () => ({


@ -0,0 +1,428 @@
/**
* E2E tests for interactive conversation loop routes.
*
* Exercises the real runConversationLoop via runInstructMode,
* simulating user stdin and verifying each conversation path.
*
* Real: runConversationLoop, callAIWithRetry, readMultilineInput,
* buildSummaryPrompt, selectPostSummaryAction
* Mocked: provider (scenario-based), config, UI, session persistence
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
setupRawStdin,
restoreStdin,
toRawInputs,
createMockProvider,
createScenarioProvider,
type MockProviderCapture,
} from './helpers/stdinSimulator.js';
// --- Infrastructure mocks (same pattern as instructMode.test.ts) ---
vi.mock('../infra/config/global/globalConfig.js', () => ({
loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
}));
vi.mock('../infra/providers/index.js', () => ({
getProvider: vi.fn(),
}));
vi.mock('../shared/utils/index.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
createLogger: () => ({ info: vi.fn(), debug: vi.fn(), error: vi.fn() }),
}));
vi.mock('../shared/context.js', () => ({
isQuietMode: vi.fn(() => false),
}));
vi.mock('../infra/config/paths.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
loadPersonaSessions: vi.fn(() => ({})),
updatePersonaSession: vi.fn(),
getProjectConfigDir: vi.fn(() => '/tmp'),
loadSessionState: vi.fn(() => null),
clearSessionState: vi.fn(),
}));
vi.mock('../shared/ui/index.js', () => ({
info: vi.fn(),
error: vi.fn(),
blankLine: vi.fn(),
StreamDisplay: vi.fn().mockImplementation(() => ({
createHandler: vi.fn(() => vi.fn()),
flush: vi.fn(),
})),
}));
vi.mock('../shared/prompt/index.js', () => ({
selectOption: vi.fn().mockResolvedValue('execute'),
}));
vi.mock('../shared/i18n/index.js', () => ({
getLabel: vi.fn((_key: string, _lang: string) => 'Mock label'),
getLabelObject: vi.fn(() => ({
intro: 'Intro',
resume: 'Resume',
noConversation: 'No conversation',
summarizeFailed: 'Summarize failed',
continuePrompt: 'Continue?',
proposed: 'Proposed:',
actionPrompt: 'What next?',
playNoTask: 'No task for /play',
cancelled: 'Cancelled',
actions: { execute: 'Execute', saveTask: 'Save', continue: 'Continue' },
})),
}));
// --- Imports (after mocks) ---
import { getProvider } from '../infra/providers/index.js';
import { selectOption } from '../shared/prompt/index.js';
import { error as logError } from '../shared/ui/index.js';
import { runInstructMode } from '../features/tasks/list/instructMode.js';
const mockGetProvider = vi.mocked(getProvider);
const mockSelectOption = vi.mocked(selectOption);
const mockLogError = vi.mocked(logError);
// --- Helpers ---
function setupProvider(responses: string[]): MockProviderCapture {
const { provider, capture } = createMockProvider(responses);
mockGetProvider.mockReturnValue(provider);
return capture;
}
function setupScenarioProvider(...scenarios: Parameters<typeof createScenarioProvider>[0]): MockProviderCapture {
const { provider, capture } = createScenarioProvider(scenarios);
mockGetProvider.mockReturnValue(provider);
return capture;
}
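The scenario provider used here comes from the stdinSimulator helpers, which this diff does not show. Below is a minimal sketch of what a scenario-driven mock call could look like; the field names `content`, `status`, `sessionId`, and `throws` are taken from the scenarios in these tests, and everything else is an assumption.

```typescript
// Hypothetical scenario-driven mock call: each scenario supplies a response,
// and may instead throw to simulate a provider failure.
type Scenario = {
  content: string;
  status?: 'done' | 'blocked';
  sessionId?: string;
  throws?: Error;
};

function makeScenarioCall(scenarios: Scenario[]) {
  let index = 0;
  return async () => {
    const scenario = scenarios[index] ?? { content: 'AI response' };
    index++;
    if (scenario.throws) throw scenario.throws; // simulate an API failure
    return {
      persona: 'interactive',
      status: scenario.status ?? 'done',
      content: scenario.content,
      sessionId: scenario.sessionId,
      timestamp: new Date(),
    };
  };
}

// First call succeeds and hands back a session id; the second one fails.
const call = makeScenarioCall([
  { content: 'OK.', sessionId: 'session-abc' },
  { content: '', throws: new Error('API timeout') },
]);
```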
async function runInstruct() {
return runInstructMode('/test', '', 'takt/test-branch', 'test-branch', '', '');
}
beforeEach(() => {
vi.clearAllMocks();
mockSelectOption.mockResolvedValue('execute');
});
afterEach(() => {
restoreStdin();
});
// =================================================================
// Route A: EOF (Ctrl+D) → cancel
// =================================================================
describe('EOF handling', () => {
it('should cancel on Ctrl+D without any conversation', async () => {
setupRawStdin(toRawInputs([null]));
setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(result.task).toBe('');
});
it('should cancel on Ctrl+D after some conversation', async () => {
setupRawStdin(toRawInputs(['hello', null]));
const capture = setupProvider(['Hi there.']);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(capture.callCount).toBe(1);
});
});
// =================================================================
// Route B: Empty input → skip, continue loop
// =================================================================
describe('empty input handling', () => {
it('should skip empty lines and continue accepting input', async () => {
setupRawStdin(toRawInputs(['', ' ', '/cancel']));
const capture = setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(capture.callCount).toBe(0);
});
});
// =================================================================
// Route C: /play → direct execute
// =================================================================
describe('/play command', () => {
it('should return execute with the given task text', async () => {
setupRawStdin(toRawInputs(['/play fix the login bug']));
setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('execute');
expect(result.task).toBe('fix the login bug');
});
it('should show error and continue when /play has no task', async () => {
setupRawStdin(toRawInputs(['/play', '/cancel']));
setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
});
});
// =================================================================
// Route D: /go → summary flow
// =================================================================
describe('/go summary flow', () => {
it('should summarize conversation and return execute', async () => {
// User: "add error handling" → AI: "What kind?" → /go → AI summary → execute
setupRawStdin(toRawInputs(['add error handling', '/go']));
const capture = setupProvider(['What kind of error handling?', 'Add try-catch to all API calls.']);
const result = await runInstruct();
expect(result.action).toBe('execute');
expect(result.task).toBe('Add try-catch to all API calls.');
expect(capture.callCount).toBe(2);
});
it('should reject /go without prior conversation', async () => {
setupRawStdin(toRawInputs(['/go', '/cancel']));
setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
});
it('should continue editing when user selects continue after /go', async () => {
setupRawStdin(toRawInputs(['task description', '/go', '/cancel']));
setupProvider(['Understood.', 'Summary of task.']);
mockSelectOption.mockResolvedValueOnce('continue');
const result = await runInstruct();
expect(result.action).toBe('cancel');
});
it('should return save_task when user selects save_task after /go', async () => {
setupRawStdin(toRawInputs(['implement feature', '/go']));
setupProvider(['Got it.', 'Implement the feature.']);
mockSelectOption.mockResolvedValue('save_task');
const result = await runInstruct();
expect(result.action).toBe('save_task');
expect(result.task).toBe('Implement the feature.');
});
});
// =================================================================
// Route D2: /go with user note
// =================================================================
describe('/go with user note', () => {
it('should append user note to summary prompt', async () => {
setupRawStdin(toRawInputs(['refactor auth', '/go also check security']));
const capture = setupProvider(['Will do.', 'Refactor auth and check security.']);
const result = await runInstruct();
expect(result.action).toBe('execute');
expect(result.task).toBe('Refactor auth and check security.');
// /go summary call should include the user note in the prompt
expect(capture.prompts[1]).toContain('also check security');
});
});
// =================================================================
// Route D3: /go summary AI returns null (call failure)
// =================================================================
describe('/go summary AI failure', () => {
it('should show error and allow retry when summary AI throws', async () => {
// Turn 1: normal message → success
// Turn 2: /go → AI throws (summary fails) → "summarize failed"
// Turn 3: /cancel
setupRawStdin(toRawInputs(['describe task', '/go', '/cancel']));
const capture = setupScenarioProvider(
{ content: 'Understood.' },
{ content: '', throws: new Error('API timeout') },
);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(capture.callCount).toBe(2);
});
});
// =================================================================
// Route D4: /go summary AI returns blocked status
// =================================================================
describe('/go summary AI blocked', () => {
it('should cancel when summary AI returns blocked', async () => {
setupRawStdin(toRawInputs(['some task', '/go']));
setupScenarioProvider(
{ content: 'OK.' },
{ content: 'Permission denied', status: 'blocked' },
);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(mockLogError).toHaveBeenCalledWith('Permission denied');
});
});
// =================================================================
// Route E: /cancel
// =================================================================
describe('/cancel command', () => {
it('should cancel immediately', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
});
it('should cancel mid-conversation', async () => {
setupRawStdin(toRawInputs(['hello', 'world', '/cancel']));
const capture = setupProvider(['Hi.', 'Hello again.']);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(capture.callCount).toBe(2);
});
});
// =================================================================
// Route F: Regular messages → AI conversation
// =================================================================
describe('regular conversation', () => {
it('should handle multi-turn conversation ending with /go', async () => {
setupRawStdin(toRawInputs([
'I need to add pagination',
'Use cursor-based pagination',
'Also add sorting',
'/go',
]));
const capture = setupProvider([
'What kind of pagination?',
'Cursor-based is a good choice.',
'OK, pagination with sorting.',
'Add cursor-based pagination and sorting to the API.',
]);
const result = await runInstruct();
expect(result.action).toBe('execute');
expect(result.task).toBe('Add cursor-based pagination and sorting to the API.');
expect(capture.callCount).toBe(4);
});
});
// =================================================================
// Route F2: Regular message AI returns blocked
// =================================================================
describe('regular message AI blocked', () => {
it('should cancel when regular message AI returns blocked', async () => {
setupRawStdin(toRawInputs(['hello']));
setupScenarioProvider(
{ content: 'Rate limited', status: 'blocked' },
);
const result = await runInstruct();
expect(result.action).toBe('cancel');
expect(mockLogError).toHaveBeenCalledWith('Rate limited');
});
});
// =================================================================
// Route G: /play command with empty task shows error
// =================================================================
describe('/play empty task error', () => {
it('should show error message when /play has no argument', async () => {
setupRawStdin(toRawInputs(['/play', '/play ', '/cancel']));
const capture = setupProvider([]);
const result = await runInstruct();
expect(result.action).toBe('cancel');
// /play with no task should not trigger any AI calls
expect(capture.callCount).toBe(0);
});
});
// =================================================================
// Session management: new sessionId propagates across calls
// =================================================================
describe('session propagation', () => {
it('should use sessionId from first call in subsequent calls', async () => {
setupRawStdin(toRawInputs(['first message', 'second message', '/go']));
const capture = setupScenarioProvider(
{ content: 'Response 1.', sessionId: 'session-abc' },
{ content: 'Response 2.' },
{ content: 'Final summary.' },
);
const result = await runInstruct();
expect(result.action).toBe('execute');
expect(result.task).toBe('Final summary.');
// Second call should receive the sessionId from first call
expect(capture.sessionIds[1]).toBe('session-abc');
});
});
// =================================================================
// Policy injection: transformPrompt wraps user input
// =================================================================
describe('policy injection', () => {
it('should wrap user messages with policy content', async () => {
setupRawStdin(toRawInputs(['fix the bug', '/cancel']));
const capture = setupProvider(['OK.']);
await runInstructMode('/test', '', 'takt/test', 'test', '', '');
// The prompt sent to AI should contain Policy section
expect(capture.prompts[0]).toContain('Policy');
expect(capture.prompts[0]).toContain('fix the bug');
expect(capture.prompts[0]).toContain('Policy Reminder');
});
});
// =================================================================
// System prompt: branch name appears in intro
// =================================================================
describe('branch context', () => {
it('should include branch name and context in system prompt', async () => {
setupRawStdin(toRawInputs(['check changes', '/cancel']));
const capture = setupProvider(['Looks good.']);
await runInstructMode(
'/test',
'## Changes\n```\nsrc/auth.ts | 50 +++\n```',
'takt/feature-auth',
'feature-auth',
'Do something',
'',
);
expect(capture.systemPrompts.length).toBeGreaterThan(0);
const systemPrompt = capture.systemPrompts[0]!;
expect(systemPrompt).toContain('takt/feature-auth');
expect(systemPrompt).toContain('src/auth.ts | 50 +++');
});
});


@ -0,0 +1,410 @@
/**
* E2E test: Retry mode with failure context and run session injection.
*
* Simulates the retry assistant flow:
* 1. Create .takt/runs/ fixtures (logs, reports)
* 2. Build RetryContext with failure info + run session
 * 3. Run retry mode with stdin simulation (user types a message, then /go)
* 4. Mock provider captures the system prompt sent to AI
* 5. Verify failure info AND run session data appear in the system prompt
*
* Real: buildRetryTemplateVars, loadTemplate, runConversationLoop,
* loadRunSessionContext, formatRunSessionForPrompt, getRunPaths
* Mocked: provider (captures system prompt), config, UI, session persistence
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import {
setupRawStdin,
restoreStdin,
toRawInputs,
createMockProvider,
type MockProviderCapture,
} from './helpers/stdinSimulator.js';
// --- Mocks (infrastructure only) ---
vi.mock('../infra/fs/session.js', () => ({
loadNdjsonLog: vi.fn(),
}));
vi.mock('../infra/config/global/globalConfig.js', () => ({
loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
}));
vi.mock('../infra/providers/index.js', () => ({
getProvider: vi.fn(),
}));
vi.mock('../shared/utils/index.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
createLogger: () => ({
info: vi.fn(),
debug: vi.fn(),
error: vi.fn(),
}),
}));
vi.mock('../shared/context.js', () => ({
isQuietMode: vi.fn(() => false),
}));
vi.mock('../infra/config/paths.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
loadPersonaSessions: vi.fn(() => ({})),
updatePersonaSession: vi.fn(),
getProjectConfigDir: vi.fn(() => '/tmp'),
loadSessionState: vi.fn(() => null),
clearSessionState: vi.fn(),
}));
vi.mock('../shared/ui/index.js', () => ({
info: vi.fn(),
error: vi.fn(),
blankLine: vi.fn(),
StreamDisplay: vi.fn().mockImplementation(() => ({
createHandler: vi.fn(() => vi.fn()),
flush: vi.fn(),
})),
}));
vi.mock('../shared/prompt/index.js', () => ({
selectOption: vi.fn().mockResolvedValue('execute'),
}));
vi.mock('../shared/i18n/index.js', () => ({
getLabel: vi.fn((_key: string, _lang: string) => 'Mock label'),
getLabelObject: vi.fn(() => ({
intro: 'Retry intro',
resume: 'Resume',
noConversation: 'No conversation',
summarizeFailed: 'Summarize failed',
continuePrompt: 'Continue?',
proposed: 'Proposed:',
actionPrompt: 'What next?',
playNoTask: 'No task',
cancelled: 'Cancelled',
actions: { execute: 'Execute', saveTask: 'Save', continue: 'Continue' },
})),
}));
// --- Imports (after mocks) ---
import { getProvider } from '../infra/providers/index.js';
import { loadNdjsonLog } from '../infra/fs/session.js';
import {
loadRunSessionContext,
formatRunSessionForPrompt,
getRunPaths,
} from '../features/interactive/runSessionReader.js';
import { runRetryMode, type RetryContext } from '../features/interactive/retryMode.js';
const mockGetProvider = vi.mocked(getProvider);
const mockLoadNdjsonLog = vi.mocked(loadNdjsonLog);
// --- Fixture helpers ---
function createTmpDir(): string {
const dir = join(tmpdir(), `takt-retry-e2e-${Date.now()}-${Math.random().toString(36).slice(2)}`);
mkdirSync(dir, { recursive: true });
return dir;
}
function createRunFixture(
cwd: string,
slug: string,
overrides?: {
meta?: Record<string, unknown>;
reports?: Array<{ name: string; content: string }>;
},
): void {
const runDir = join(cwd, '.takt', 'runs', slug);
mkdirSync(join(runDir, 'logs'), { recursive: true });
mkdirSync(join(runDir, 'reports'), { recursive: true });
const meta = {
task: `Task for ${slug}`,
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
...overrides?.meta,
};
writeFileSync(join(runDir, 'meta.json'), JSON.stringify(meta), 'utf-8');
writeFileSync(join(runDir, 'logs', 'session-001.jsonl'), '{}', 'utf-8');
for (const report of overrides?.reports ?? []) {
writeFileSync(join(runDir, 'reports', report.name), report.content, 'utf-8');
}
}
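The fixture above produces an on-disk `.takt/runs/<slug>/` layout (a `meta.json` plus `logs/` and `reports/` directories) that the real run-session readers later consume. A self-contained sketch of the same layout, written and read back; the `run-demo` slug and field values here are illustrative only.

```typescript
import { mkdirSync, writeFileSync, readFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Recreate the fixture layout used by createRunFixture and read meta.json back.
const root = join(tmpdir(), `takt-fixture-sketch-${Date.now()}`);
const runDir = join(root, '.takt', 'runs', 'run-demo');
mkdirSync(join(runDir, 'logs'), { recursive: true });
mkdirSync(join(runDir, 'reports'), { recursive: true });
writeFileSync(
  join(runDir, 'meta.json'),
  JSON.stringify({ task: 'Demo task', piece: 'default', status: 'failed', runSlug: 'run-demo' }),
  'utf-8',
);

const meta = JSON.parse(readFileSync(join(runDir, 'meta.json'), 'utf-8'));
// meta.status reads back as 'failed'
rmSync(root, { recursive: true, force: true });
```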
function setupMockNdjsonLog(history: Array<{ step: string; persona: string; status: string; content: string }>): void {
mockLoadNdjsonLog.mockReturnValue({
task: 'mock',
projectDir: '',
pieceName: 'default',
iterations: history.length,
startTime: '2026-02-01T00:00:00.000Z',
status: 'completed',
history: history.map((h) => ({
...h,
instruction: '',
timestamp: '2026-02-01T00:00:00.000Z',
})),
});
}
function setupProvider(responses: string[]): MockProviderCapture {
const { provider, capture } = createMockProvider(responses);
mockGetProvider.mockReturnValue(provider);
return capture;
}
// --- Tests ---
describe('E2E: Retry mode with failure context injection', () => {
let tmpDir: string;
beforeEach(() => {
tmpDir = createTmpDir();
vi.clearAllMocks();
});
afterEach(() => {
restoreStdin();
rmSync(tmpDir, { recursive: true, force: true });
});
it('should inject failure info into system prompt', async () => {
setupRawStdin(toRawInputs(['what went wrong?', '/go']));
const capture = setupProvider([
'The review step failed due to a timeout.',
'Fix review timeout by increasing the limit.',
]);
const retryContext: RetryContext = {
failure: {
taskName: 'implement-auth',
createdAt: '2026-02-15T10:00:00Z',
failedMovement: 'review',
error: 'Timeout after 300s',
lastMessage: 'Agent stopped responding',
retryNote: '',
},
branchName: 'takt/implement-auth',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '',
movementPreviews: [],
},
run: null,
};
const result = await runRetryMode(tmpDir, retryContext);
// Verify: system prompt contains failure information
expect(capture.systemPrompts.length).toBeGreaterThan(0);
const systemPrompt = capture.systemPrompts[0]!;
expect(systemPrompt).toContain('Retry Assistant');
expect(systemPrompt).toContain('implement-auth');
expect(systemPrompt).toContain('takt/implement-auth');
expect(systemPrompt).toContain('review');
expect(systemPrompt).toContain('Timeout after 300s');
expect(systemPrompt).toContain('Agent stopped responding');
// Verify: flow completed
expect(result.action).toBe('execute');
expect(result.task).toBe('Fix review timeout by increasing the limit.');
expect(capture.callCount).toBe(2);
});
it('should inject failure info AND run session data into system prompt', async () => {
// Create run fixture with logs and reports
createRunFixture(tmpDir, 'run-failed', {
meta: { task: 'Build login page', status: 'failed' },
reports: [
{ name: '00-plan.md', content: '# Plan\n\nLogin form with OAuth2.' },
],
});
setupMockNdjsonLog([
{ step: 'plan', persona: 'architect', status: 'completed', content: 'Planned OAuth2 login flow' },
{ step: 'implement', persona: 'coder', status: 'failed', content: 'Failed at CSS compilation' },
]);
// Load real run session data
const sessionContext = loadRunSessionContext(tmpDir, 'run-failed');
const formatted = formatRunSessionForPrompt(sessionContext);
const paths = getRunPaths(tmpDir, 'run-failed');
setupRawStdin(toRawInputs(['fix the CSS issue', '/go']));
const capture = setupProvider([
'The CSS compilation error is likely due to missing imports.',
'Fix CSS imports in login component.',
]);
const retryContext: RetryContext = {
failure: {
taskName: 'build-login',
createdAt: '2026-02-15T14:00:00Z',
failedMovement: 'implement',
error: 'CSS compilation failed',
lastMessage: 'PostCSS error: unknown property',
retryNote: '',
},
branchName: 'takt/build-login',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '',
movementPreviews: [],
},
run: {
logsDir: paths.logsDir,
reportsDir: paths.reportsDir,
task: formatted.runTask,
piece: formatted.runPiece,
status: formatted.runStatus,
movementLogs: formatted.runMovementLogs,
reports: formatted.runReports,
},
};
const result = await runRetryMode(tmpDir, retryContext);
// Verify: system prompt contains BOTH failure info and run session data
const systemPrompt = capture.systemPrompts[0]!;
// Failure info
expect(systemPrompt).toContain('build-login');
expect(systemPrompt).toContain('CSS compilation failed');
expect(systemPrompt).toContain('PostCSS error: unknown property');
expect(systemPrompt).toContain('implement');
// Run session data
expect(systemPrompt).toContain('Previous Run Data');
expect(systemPrompt).toContain('Build login page');
expect(systemPrompt).toContain('Planned OAuth2 login flow');
expect(systemPrompt).toContain('Failed at CSS compilation');
expect(systemPrompt).toContain('00-plan.md');
expect(systemPrompt).toContain('Login form with OAuth2');
// Run paths (AI can use Read tool)
expect(systemPrompt).toContain(paths.logsDir);
expect(systemPrompt).toContain(paths.reportsDir);
// Flow completed
expect(result.action).toBe('execute');
expect(result.task).toBe('Fix CSS imports in login component.');
});
it('should include existing retry note in system prompt', async () => {
setupRawStdin(toRawInputs(['what should I do?', '/go']));
const capture = setupProvider([
'Based on the previous attempt, the mocks are still incomplete.',
'Add complete mocks for all API endpoints.',
]);
const retryContext: RetryContext = {
failure: {
taskName: 'fix-tests',
createdAt: '2026-02-15T16:00:00Z',
failedMovement: '',
error: 'Test suite failed',
lastMessage: '',
retryNote: 'Previous attempt: added missing mocks but still failing',
},
branchName: 'takt/fix-tests',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '',
movementPreviews: [],
},
run: null,
};
await runRetryMode(tmpDir, retryContext);
const systemPrompt = capture.systemPrompts[0]!;
expect(systemPrompt).toContain('Existing Retry Note');
expect(systemPrompt).toContain('Previous attempt: added missing mocks but still failing');
// absent fields should NOT appear as sections
expect(systemPrompt).not.toContain('Failed movement');
expect(systemPrompt).not.toContain('Last Message');
});
it('should cancel cleanly and not crash', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupProvider([]);
const retryContext: RetryContext = {
failure: {
taskName: 'some-task',
createdAt: '2026-02-15T12:00:00Z',
failedMovement: 'plan',
error: 'Unknown error',
lastMessage: '',
retryNote: '',
},
branchName: 'takt/some-task',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '',
movementPreviews: [],
},
run: null,
};
const result = await runRetryMode(tmpDir, retryContext);
expect(result.action).toBe('cancel');
expect(result.task).toBe('');
});
it('should handle conversation before /go with failure context', async () => {
setupRawStdin(toRawInputs([
'what was the error?',
'can you suggest a fix?',
'/go',
]));
const capture = setupProvider([
'The error was a timeout in the review step.',
'You could increase the timeout limit or optimize the review.',
'Increase review timeout to 600s and add retry logic.',
]);
const retryContext: RetryContext = {
failure: {
taskName: 'optimize-review',
createdAt: '2026-02-15T18:00:00Z',
failedMovement: 'review',
error: 'Timeout',
lastMessage: '',
retryNote: '',
},
branchName: 'takt/optimize-review',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '',
movementPreviews: [],
},
run: null,
};
const result = await runRetryMode(tmpDir, retryContext);
expect(result.action).toBe('execute');
expect(result.task).toBe('Increase review timeout to 600s and add retry logic.');
expect(capture.callCount).toBe(3);
});
});


@ -0,0 +1,297 @@
/**
 * E2E test: Run session loading and interactive instruct mode prompt injection.
*
* Simulates the full interactive flow:
* 1. Create .takt/runs/ fixtures on real file system
* 2. Load run session with real listRecentRuns / loadRunSessionContext
 * 3. Run instruct mode with stdin simulation (user types a message, then /go)
* 4. Mock provider captures the system prompt sent to AI
* 5. Verify run session data appears in the system prompt
*
* Real: listRecentRuns, loadRunSessionContext, formatRunSessionForPrompt,
* loadTemplate, runConversationLoop (actual conversation loop)
* Mocked: provider (captures system prompt), config, UI, session persistence
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import {
setupRawStdin,
restoreStdin,
toRawInputs,
createMockProvider,
type MockProviderCapture,
} from './helpers/stdinSimulator.js';
// --- Mocks (infrastructure only, not core logic) ---
vi.mock('../infra/fs/session.js', () => ({
loadNdjsonLog: vi.fn(),
}));
vi.mock('../infra/config/global/globalConfig.js', () => ({
loadGlobalConfig: vi.fn(() => ({ provider: 'mock', language: 'en' })),
getBuiltinPiecesEnabled: vi.fn().mockReturnValue(true),
}));
vi.mock('../infra/providers/index.js', () => ({
getProvider: vi.fn(),
}));
vi.mock('../shared/utils/index.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
createLogger: () => ({
info: vi.fn(),
debug: vi.fn(),
error: vi.fn(),
}),
}));
vi.mock('../shared/context.js', () => ({
isQuietMode: vi.fn(() => false),
}));
vi.mock('../infra/config/paths.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
loadPersonaSessions: vi.fn(() => ({})),
updatePersonaSession: vi.fn(),
getProjectConfigDir: vi.fn(() => '/tmp'),
loadSessionState: vi.fn(() => null),
clearSessionState: vi.fn(),
}));
vi.mock('../shared/ui/index.js', () => ({
info: vi.fn(),
error: vi.fn(),
blankLine: vi.fn(),
StreamDisplay: vi.fn().mockImplementation(() => ({
createHandler: vi.fn(() => vi.fn()),
flush: vi.fn(),
})),
}));
vi.mock('../shared/prompt/index.js', () => ({
selectOption: vi.fn().mockResolvedValue('execute'),
}));
vi.mock('../shared/i18n/index.js', () => ({
getLabel: vi.fn((_key: string, _lang: string) => 'Mock label'),
getLabelObject: vi.fn(() => ({
intro: 'Instruct intro',
resume: 'Resume',
noConversation: 'No conversation',
summarizeFailed: 'Summarize failed',
continuePrompt: 'Continue?',
proposed: 'Proposed:',
actionPrompt: 'What next?',
playNoTask: 'No task',
cancelled: 'Cancelled',
actions: { execute: 'Execute', saveTask: 'Save', continue: 'Continue' },
})),
}));
// --- Imports (after mocks) ---
import { getProvider } from '../infra/providers/index.js';
import { loadNdjsonLog } from '../infra/fs/session.js';
import {
listRecentRuns,
loadRunSessionContext,
} from '../features/interactive/runSessionReader.js';
import { runInstructMode } from '../features/tasks/list/instructMode.js';
const mockGetProvider = vi.mocked(getProvider);
const mockLoadNdjsonLog = vi.mocked(loadNdjsonLog);
// --- Fixture helpers ---
function createTmpDir(): string {
const dir = join(tmpdir(), `takt-e2e-${Date.now()}-${Math.random().toString(36).slice(2)}`);
mkdirSync(dir, { recursive: true });
return dir;
}
function createRunFixture(
cwd: string,
slug: string,
overrides?: {
meta?: Record<string, unknown>;
reports?: Array<{ name: string; content: string }>;
emptyMeta?: boolean;
corruptMeta?: boolean;
},
): void {
const runDir = join(cwd, '.takt', 'runs', slug);
mkdirSync(join(runDir, 'logs'), { recursive: true });
mkdirSync(join(runDir, 'reports'), { recursive: true });
if (overrides?.emptyMeta) {
writeFileSync(join(runDir, 'meta.json'), '', 'utf-8');
} else if (overrides?.corruptMeta) {
writeFileSync(join(runDir, 'meta.json'), '{ broken json', 'utf-8');
} else {
const meta = {
task: `Task for ${slug}`,
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
...overrides?.meta,
};
writeFileSync(join(runDir, 'meta.json'), JSON.stringify(meta), 'utf-8');
}
writeFileSync(join(runDir, 'logs', 'session-001.jsonl'), '{}', 'utf-8');
for (const report of overrides?.reports ?? []) {
writeFileSync(join(runDir, 'reports', report.name), report.content, 'utf-8');
}
}
function setupMockNdjsonLog(history: Array<{ step: string; persona: string; status: string; content: string }>): void {
mockLoadNdjsonLog.mockReturnValue({
task: 'mock',
projectDir: '',
pieceName: 'default',
iterations: history.length,
startTime: '2026-02-01T00:00:00.000Z',
status: 'completed',
history: history.map((h) => ({
...h,
instruction: '',
timestamp: '2026-02-01T00:00:00.000Z',
})),
});
}
function setupProvider(responses: string[]): MockProviderCapture {
const { provider, capture } = createMockProvider(responses);
mockGetProvider.mockReturnValue(provider);
return capture;
}
// --- Tests ---
describe('E2E: Run session → instruct mode with interactive flow', () => {
let tmpDir: string;
beforeEach(() => {
tmpDir = createTmpDir();
vi.clearAllMocks();
});
afterEach(() => {
restoreStdin();
rmSync(tmpDir, { recursive: true, force: true });
});
it('should inject run session data into system prompt during interactive conversation', async () => {
// Fixture: run with movement logs and reports
createRunFixture(tmpDir, 'run-auth', {
meta: { task: 'Implement JWT auth' },
reports: [
{ name: '00-plan.md', content: '# Plan\n\nJWT auth with refresh tokens.' },
],
});
setupMockNdjsonLog([
{ step: 'plan', persona: 'architect', status: 'completed', content: 'Planned JWT auth flow' },
{ step: 'implement', persona: 'coder', status: 'completed', content: 'Created auth middleware' },
]);
// Load run session (real code)
const context = loadRunSessionContext(tmpDir, 'run-auth');
// Simulate: user types "fix the token expiry" → /go → AI summarizes → user selects execute
setupRawStdin(toRawInputs(['fix the token expiry', '/go']));
const capture = setupProvider(['Sure, I can help with that.', 'Fix token expiry handling in auth middleware.']);
const result = await runInstructMode(
tmpDir,
'## Branch: takt/fix-auth\n',
'takt/fix-auth',
'fix-auth',
'Implement JWT auth',
'',
{ name: 'default', description: '', pieceStructure: '', movementPreviews: [] },
context,
);
// Verify: system prompt contains run session data
expect(capture.systemPrompts.length).toBeGreaterThan(0);
const systemPrompt = capture.systemPrompts[0]!;
expect(systemPrompt).toContain('Previous Run Reference');
expect(systemPrompt).toContain('Implement JWT auth');
expect(systemPrompt).toContain('Planned JWT auth flow');
expect(systemPrompt).toContain('Created auth middleware');
expect(systemPrompt).toContain('00-plan.md');
expect(systemPrompt).toContain('JWT auth with refresh tokens');
// Verify: interactive flow completed with execute action
expect(result.action).toBe('execute');
expect(result.task).toBe('Fix token expiry handling in auth middleware.');
// Verify: AI was called twice (user message + /go summary)
expect(capture.callCount).toBe(2);
});
it('should run instruct mode without run context and cancel cleanly', async () => {
setupRawStdin(toRawInputs(['/cancel']));
setupProvider([]);
const result = await runInstructMode(tmpDir, '', 'takt/fix', 'fix', '', '', undefined, undefined);
expect(result.action).toBe('cancel');
});
it('should cancel cleanly mid-conversation with run session', async () => {
createRunFixture(tmpDir, 'run-1');
setupMockNdjsonLog([]);
const context = loadRunSessionContext(tmpDir, 'run-1');
setupRawStdin(toRawInputs(['some thought', '/cancel']));
const capture = setupProvider(['I understand.']);
const result = await runInstructMode(
tmpDir, '', 'takt/branch', 'branch', '', '', undefined, context,
);
expect(result.action).toBe('cancel');
// AI was called once for "some thought", then /cancel exits
expect(capture.callCount).toBe(1);
});
it('should skip empty and corrupt meta.json in listRecentRuns', () => {
createRunFixture(tmpDir, 'valid-run');
createRunFixture(tmpDir, 'empty-meta', { emptyMeta: true });
createRunFixture(tmpDir, 'corrupt-meta', { corruptMeta: true });
const runs = listRecentRuns(tmpDir);
expect(runs).toHaveLength(1);
expect(runs[0]!.slug).toBe('valid-run');
});
it('should sort runs by startTime descending', () => {
createRunFixture(tmpDir, 'old', { meta: { startTime: '2026-01-01T00:00:00Z' } });
createRunFixture(tmpDir, 'new', { meta: { startTime: '2026-02-15T00:00:00Z' } });
const runs = listRecentRuns(tmpDir);
expect(runs[0]!.slug).toBe('new');
expect(runs[1]!.slug).toBe('old');
});
it('should truncate long movement content to 500 chars', () => {
createRunFixture(tmpDir, 'long');
setupMockNdjsonLog([
{ step: 'implement', persona: 'coder', status: 'completed', content: 'X'.repeat(800) },
]);
const context = loadRunSessionContext(tmpDir, 'long');
expect(context.movementLogs[0]!.content.length).toBe(501);
expect(context.movementLogs[0]!.content.endsWith('…')).toBe(true);
});
});

View File

@@ -35,6 +35,16 @@ describe('loadTemplate', () => {
expect(result).toContain('対話モードポリシー');
});
it('loads an English retry system prompt template', () => {
const result = loadTemplate('score_retry_system_prompt', 'en');
expect(result).toContain('Retry Assistant');
});
it('loads a Japanese retry system prompt template', () => {
const result = loadTemplate('score_retry_system_prompt', 'ja');
expect(result).toContain('リトライアシスタント');
});
it('loads score_slug_system_prompt with explicit lang', () => {
const result = loadTemplate('score_slug_system_prompt', 'en');
expect(result).toContain('You are a slug generator');
@@ -58,6 +68,19 @@ describe('variable substitution', () => {
expect(result).toContain('You are the agent');
});
it('replaces taskHistory variable in score_summary_system_prompt', () => {
const result = loadTemplate('score_summary_system_prompt', 'en', {
pieceInfo: true,
pieceName: 'piece',
pieceDescription: 'desc',
movementDetails: '',
conversation: 'Conversation: User: test',
taskHistory: '## Task execution history\n- Worktree ID: wt-1',
});
expect(result).toContain('## Task execution history');
expect(result).toContain('Worktree ID: wt-1');
});
it('replaces multiple different variables', () => {
const result = loadTemplate('perform_judge_message', 'en', {
agentOutput: 'test output',

View File

@@ -0,0 +1,147 @@
/**
* Unit tests for retryMode: buildRetryTemplateVars
*/
import { describe, it, expect } from 'vitest';
import { buildRetryTemplateVars, type RetryContext } from '../features/interactive/retryMode.js';
function createRetryContext(overrides?: Partial<RetryContext>): RetryContext {
return {
failure: {
taskName: 'my-task',
createdAt: '2026-02-15T10:00:00Z',
failedMovement: 'review',
error: 'Timeout',
lastMessage: 'Agent stopped',
retryNote: '',
},
branchName: 'takt/my-task',
pieceContext: {
name: 'default',
description: '',
pieceStructure: '1. plan → 2. implement → 3. review',
movementPreviews: [],
},
run: null,
...overrides,
};
}
describe('buildRetryTemplateVars', () => {
it('should map failure info to template variables', () => {
const ctx = createRetryContext();
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.taskName).toBe('my-task');
expect(vars.branchName).toBe('takt/my-task');
expect(vars.createdAt).toBe('2026-02-15T10:00:00Z');
expect(vars.failedMovement).toBe('review');
expect(vars.failureError).toBe('Timeout');
expect(vars.failureLastMessage).toBe('Agent stopped');
});
it('should set empty string for absent optional fields', () => {
const ctx = createRetryContext({
failure: {
taskName: 'task',
createdAt: '2026-01-01T00:00:00Z',
failedMovement: '',
error: 'Error',
lastMessage: '',
retryNote: '',
},
});
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.failedMovement).toBe('');
expect(vars.failureLastMessage).toBe('');
expect(vars.retryNote).toBe('');
});
it('should set hasRun=false and empty run vars when run is null', () => {
const ctx = createRetryContext({ run: null });
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.hasRun).toBe(false);
expect(vars.runLogsDir).toBe('');
expect(vars.runReportsDir).toBe('');
expect(vars.runTask).toBe('');
expect(vars.runPiece).toBe('');
expect(vars.runStatus).toBe('');
expect(vars.runMovementLogs).toBe('');
expect(vars.runReports).toBe('');
});
it('should set hasRun=true and populate run vars when run is provided', () => {
const ctx = createRetryContext({
run: {
logsDir: '/project/.takt/runs/slug/logs',
reportsDir: '/project/.takt/runs/slug/reports',
task: 'Build feature',
piece: 'default',
status: 'failed',
movementLogs: '### plan\nPlanned.',
reports: '### 00-plan.md\n# Plan',
},
});
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.hasRun).toBe(true);
expect(vars.runLogsDir).toBe('/project/.takt/runs/slug/logs');
expect(vars.runReportsDir).toBe('/project/.takt/runs/slug/reports');
expect(vars.runTask).toBe('Build feature');
expect(vars.runPiece).toBe('default');
expect(vars.runStatus).toBe('failed');
expect(vars.runMovementLogs).toBe('### plan\nPlanned.');
expect(vars.runReports).toBe('### 00-plan.md\n# Plan');
});
it('should set hasPiecePreview=false when no movement previews', () => {
const ctx = createRetryContext();
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.hasPiecePreview).toBe(false);
expect(vars.movementDetails).toBe('');
});
it('should set hasPiecePreview=true and format movement details when previews exist', () => {
const ctx = createRetryContext({
pieceContext: {
name: 'default',
description: '',
pieceStructure: '1. plan',
movementPreviews: [
{
name: 'plan',
personaDisplayName: 'Architect',
personaContent: 'You are an architect.',
instructionContent: 'Plan the feature.',
allowedTools: ['Read', 'Grep'],
canEdit: false,
},
],
},
});
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.hasPiecePreview).toBe(true);
expect(vars.movementDetails).toContain('plan');
expect(vars.movementDetails).toContain('Architect');
});
it('should include retryNote when present', () => {
const ctx = createRetryContext({
failure: {
taskName: 'task',
createdAt: '2026-01-01T00:00:00Z',
failedMovement: '',
error: 'Error',
lastMessage: '',
retryNote: 'Added more specific error handling',
},
});
const vars = buildRetryTemplateVars(ctx, 'en');
expect(vars.retryNote).toBe('Added more specific error handling');
});
});
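The suite above pins the variable mapping down almost completely. As a reference, a minimal sketch of `buildRetryTemplateVars` that would satisfy these assertions — with local type stubs, and a movement-details layout that is an assumption rather than the project's real template — might look like:

```typescript
// Hedged sketch, not the actual retryMode.ts implementation.
interface MovementPreview {
  name: string;
  personaDisplayName: string;
  personaContent: string;
  instructionContent: string;
  allowedTools: string[];
  canEdit: boolean;
}

interface RetryContext {
  failure: {
    taskName: string;
    createdAt: string;
    failedMovement: string;
    error: string;
    lastMessage: string;
    retryNote: string;
  };
  branchName: string;
  pieceContext: {
    name: string;
    description: string;
    pieceStructure: string;
    movementPreviews: MovementPreview[];
  };
  run: {
    logsDir: string;
    reportsDir: string;
    task: string;
    piece: string;
    status: string;
    movementLogs: string;
    reports: string;
  } | null;
}

function buildRetryTemplateVars(ctx: RetryContext, _lang: string) {
  const previews = ctx.pieceContext.movementPreviews;
  return {
    // Failure info maps one-to-one onto template variables.
    taskName: ctx.failure.taskName,
    branchName: ctx.branchName,
    createdAt: ctx.failure.createdAt,
    failedMovement: ctx.failure.failedMovement,
    failureError: ctx.failure.error,
    failureLastMessage: ctx.failure.lastMessage,
    retryNote: ctx.failure.retryNote,
    // Run variables collapse to empty strings when there is no prior run.
    hasRun: ctx.run !== null,
    runLogsDir: ctx.run?.logsDir ?? '',
    runReportsDir: ctx.run?.reportsDir ?? '',
    runTask: ctx.run?.task ?? '',
    runPiece: ctx.run?.piece ?? '',
    runStatus: ctx.run?.status ?? '',
    runMovementLogs: ctx.run?.movementLogs ?? '',
    runReports: ctx.run?.reports ?? '',
    // Assumed layout: one heading per movement with its persona display name.
    hasPiecePreview: previews.length > 0,
    movementDetails: previews
      .map((p) => `### ${p.name} (${p.personaDisplayName})\n${p.instructionContent}`)
      .join('\n\n'),
  };
}
```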

View File

@@ -0,0 +1,91 @@
/**
* Tests for runSelector
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
vi.mock('../shared/prompt/index.js', () => ({
selectOption: vi.fn(),
}));
vi.mock('../shared/i18n/index.js', () => ({
getLabel: vi.fn((key: string) => key),
}));
vi.mock('../shared/ui/index.js', () => ({
info: vi.fn(),
}));
vi.mock('../features/interactive/runSessionReader.js', () => ({
listRecentRuns: vi.fn(),
}));
import { selectOption } from '../shared/prompt/index.js';
import { info } from '../shared/ui/index.js';
import { listRecentRuns } from '../features/interactive/runSessionReader.js';
import { selectRun } from '../features/interactive/runSelector.js';
const mockListRecentRuns = vi.mocked(listRecentRuns);
const mockSelectOption = vi.mocked(selectOption);
const mockInfo = vi.mocked(info);
describe('selectRun', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('should return null and show message when no runs exist', async () => {
mockListRecentRuns.mockReturnValue([]);
const result = await selectRun('/some/path', 'en');
expect(result).toBeNull();
expect(mockInfo).toHaveBeenCalledWith('interactive.runSelector.noRuns');
});
it('should present run options and return selected slug', async () => {
mockListRecentRuns.mockReturnValue([
{ slug: 'run-1', task: 'First task', piece: 'default', status: 'completed', startTime: '2026-02-01T10:00:00Z' },
{ slug: 'run-2', task: 'Second task', piece: 'custom', status: 'aborted', startTime: '2026-01-15T08:00:00Z' },
]);
mockSelectOption.mockResolvedValue('run-1');
const result = await selectRun('/some/path', 'en');
expect(result).toBe('run-1');
expect(mockSelectOption).toHaveBeenCalledTimes(1);
const callArgs = mockSelectOption.mock.calls[0];
expect(callArgs[0]).toBe('interactive.runSelector.prompt');
const options = callArgs[1];
expect(options).toHaveLength(2);
expect(options[0].value).toBe('run-1');
expect(options[0].label).toBe('First task');
expect(options[1].value).toBe('run-2');
expect(options[1].label).toBe('Second task');
});
it('should return null when user cancels selection', async () => {
mockListRecentRuns.mockReturnValue([
{ slug: 'run-1', task: 'Task', piece: 'default', status: 'completed', startTime: '2026-02-01T00:00:00Z' },
]);
mockSelectOption.mockResolvedValue(null);
const result = await selectRun('/some/path', 'en');
expect(result).toBeNull();
});
it('should truncate long task labels', async () => {
const longTask = 'A'.repeat(100);
mockListRecentRuns.mockReturnValue([
{ slug: 'run-1', task: longTask, piece: 'default', status: 'completed', startTime: '2026-02-01T00:00:00Z' },
]);
mockSelectOption.mockResolvedValue('run-1');
await selectRun('/some/path', 'en');
const options = mockSelectOption.mock.calls[0][1];
expect(options[0].label.length).toBeLessThanOrEqual(61); // 60 + '…'
});
});

View File

@@ -0,0 +1,370 @@
/**
* Tests for runSessionReader
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
vi.mock('../infra/fs/session.js', () => ({
loadNdjsonLog: vi.fn(),
}));
import { loadNdjsonLog } from '../infra/fs/session.js';
import {
listRecentRuns,
findRunForTask,
loadRunSessionContext,
formatRunSessionForPrompt,
type RunSessionContext,
} from '../features/interactive/runSessionReader.js';
const mockLoadNdjsonLog = vi.mocked(loadNdjsonLog);
function createTmpDir(): string {
const dir = join(tmpdir(), `takt-test-runSessionReader-${Date.now()}-${Math.random().toString(36).slice(2)}`);
mkdirSync(dir, { recursive: true });
return dir;
}
function createRunDir(
cwd: string,
slug: string,
meta: Record<string, unknown>,
): string {
const runDir = join(cwd, '.takt', 'runs', slug);
mkdirSync(join(runDir, 'logs'), { recursive: true });
mkdirSync(join(runDir, 'reports'), { recursive: true });
writeFileSync(join(runDir, 'meta.json'), JSON.stringify(meta), 'utf-8');
return runDir;
}
describe('listRecentRuns', () => {
let tmpDir: string;
beforeEach(() => {
tmpDir = createTmpDir();
vi.clearAllMocks();
});
it('should return empty array when .takt/runs does not exist', () => {
const result = listRecentRuns(tmpDir);
expect(result).toEqual([]);
});
it('should return empty array when no runs have meta.json', () => {
mkdirSync(join(tmpDir, '.takt', 'runs', 'empty-run'), { recursive: true });
const result = listRecentRuns(tmpDir);
expect(result).toEqual([]);
});
it('should return runs sorted by startTime descending', () => {
createRunDir(tmpDir, 'run-old', {
task: 'Old task',
piece: 'default',
status: 'completed',
startTime: '2026-01-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-old/logs',
reportDirectory: '.takt/runs/run-old/reports',
runSlug: 'run-old',
});
createRunDir(tmpDir, 'run-new', {
task: 'New task',
piece: 'custom',
status: 'running',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-new/logs',
reportDirectory: '.takt/runs/run-new/reports',
runSlug: 'run-new',
});
const result = listRecentRuns(tmpDir);
expect(result).toHaveLength(2);
expect(result[0].slug).toBe('run-new');
expect(result[1].slug).toBe('run-old');
});
it('should limit results to 10', () => {
for (let i = 0; i < 12; i++) {
const slug = `run-${String(i).padStart(2, '0')}`;
createRunDir(tmpDir, slug, {
task: `Task ${i}`,
piece: 'default',
status: 'completed',
startTime: `2026-01-${String(i + 1).padStart(2, '0')}T00:00:00.000Z`,
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
});
}
const result = listRecentRuns(tmpDir);
expect(result).toHaveLength(10);
});
afterEach(() => {
rmSync(tmpDir, { recursive: true, force: true });
});
});
describe('findRunForTask', () => {
let tmpDir: string;
beforeEach(() => {
tmpDir = createTmpDir();
vi.clearAllMocks();
});
afterEach(() => {
rmSync(tmpDir, { recursive: true, force: true });
});
it('should return null when no runs exist', () => {
const result = findRunForTask(tmpDir, 'Some task');
expect(result).toBeNull();
});
it('should return null when no runs match the task content', () => {
createRunDir(tmpDir, 'run-other', {
task: 'Different task',
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-other/logs',
reportDirectory: '.takt/runs/run-other/reports',
runSlug: 'run-other',
});
const result = findRunForTask(tmpDir, 'My specific task');
expect(result).toBeNull();
});
it('should return the matching run slug', () => {
createRunDir(tmpDir, 'run-match', {
task: 'Build login page',
piece: 'default',
status: 'failed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-match/logs',
reportDirectory: '.takt/runs/run-match/reports',
runSlug: 'run-match',
});
const result = findRunForTask(tmpDir, 'Build login page');
expect(result).toBe('run-match');
});
it('should return the most recent matching run when multiple exist', () => {
createRunDir(tmpDir, 'run-old', {
task: 'Build login page',
piece: 'default',
status: 'failed',
startTime: '2026-01-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-old/logs',
reportDirectory: '.takt/runs/run-old/reports',
runSlug: 'run-old',
});
createRunDir(tmpDir, 'run-new', {
task: 'Build login page',
piece: 'default',
status: 'failed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: '.takt/runs/run-new/logs',
reportDirectory: '.takt/runs/run-new/reports',
runSlug: 'run-new',
});
const result = findRunForTask(tmpDir, 'Build login page');
expect(result).toBe('run-new');
});
});
describe('loadRunSessionContext', () => {
let tmpDir: string;
beforeEach(() => {
tmpDir = createTmpDir();
vi.clearAllMocks();
});
it('should throw when run does not exist', () => {
expect(() => loadRunSessionContext(tmpDir, 'nonexistent')).toThrow('Run not found: nonexistent');
});
it('should load context with movement logs and reports', () => {
const slug = 'test-run';
const runDir = createRunDir(tmpDir, slug, {
task: 'Test task',
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
});
// Create a log file
writeFileSync(join(runDir, 'logs', 'session-001.jsonl'), '{}', 'utf-8');
// Create a report file
writeFileSync(join(runDir, 'reports', '00-plan.md'), '# Plan\nDetails here', 'utf-8');
mockLoadNdjsonLog.mockReturnValue({
task: 'Test task',
projectDir: '',
pieceName: 'default',
iterations: 1,
startTime: '2026-02-01T00:00:00.000Z',
status: 'completed',
history: [
{
step: 'implement',
persona: 'coder',
instruction: 'Implement feature',
status: 'completed',
timestamp: '2026-02-01T00:01:00.000Z',
content: 'Implementation done',
},
],
});
const context = loadRunSessionContext(tmpDir, slug);
expect(context.task).toBe('Test task');
expect(context.piece).toBe('default');
expect(context.status).toBe('completed');
expect(context.movementLogs).toHaveLength(1);
expect(context.movementLogs[0].step).toBe('implement');
expect(context.movementLogs[0].content).toBe('Implementation done');
expect(context.reports).toHaveLength(1);
expect(context.reports[0].filename).toBe('00-plan.md');
});
it('should truncate movement content to 500 characters', () => {
const slug = 'truncate-run';
const runDir = createRunDir(tmpDir, slug, {
task: 'Truncate test',
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
});
writeFileSync(join(runDir, 'logs', 'session-001.jsonl'), '{}', 'utf-8');
const longContent = 'A'.repeat(600);
mockLoadNdjsonLog.mockReturnValue({
task: 'Truncate test',
projectDir: '',
pieceName: 'default',
iterations: 1,
startTime: '2026-02-01T00:00:00.000Z',
status: 'completed',
history: [
{
step: 'implement',
persona: 'coder',
instruction: 'Do it',
status: 'completed',
timestamp: '2026-02-01T00:01:00.000Z',
content: longContent,
},
],
});
const context = loadRunSessionContext(tmpDir, slug);
expect(context.movementLogs[0].content.length).toBe(501); // 500 + '…'
expect(context.movementLogs[0].content.endsWith('…')).toBe(true);
});
it('should handle missing log files gracefully', () => {
const slug = 'no-logs-run';
createRunDir(tmpDir, slug, {
task: 'No logs',
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
});
const context = loadRunSessionContext(tmpDir, slug);
expect(context.movementLogs).toEqual([]);
expect(context.reports).toEqual([]);
});
it('should exclude provider-events log files', () => {
const slug = 'provider-events-run';
const runDir = createRunDir(tmpDir, slug, {
task: 'Provider events test',
piece: 'default',
status: 'completed',
startTime: '2026-02-01T00:00:00.000Z',
logsDirectory: `.takt/runs/${slug}/logs`,
reportDirectory: `.takt/runs/${slug}/reports`,
runSlug: slug,
});
// Only provider-events log file
writeFileSync(join(runDir, 'logs', 'session-001-provider-events.jsonl'), '{}', 'utf-8');
const context = loadRunSessionContext(tmpDir, slug);
expect(mockLoadNdjsonLog).not.toHaveBeenCalled();
expect(context.movementLogs).toEqual([]);
});
afterEach(() => {
rmSync(tmpDir, { recursive: true, force: true });
});
});
describe('formatRunSessionForPrompt', () => {
it('should format context into prompt variables', () => {
const ctx: RunSessionContext = {
task: 'Implement feature X',
piece: 'default',
status: 'completed',
movementLogs: [
{ step: 'plan', persona: 'architect', status: 'completed', content: 'Plan content' },
{ step: 'implement', persona: 'coder', status: 'completed', content: 'Code content' },
],
reports: [
{ filename: '00-plan.md', content: '# Plan\nDetails' },
],
};
const result = formatRunSessionForPrompt(ctx);
expect(result.runTask).toBe('Implement feature X');
expect(result.runPiece).toBe('default');
expect(result.runStatus).toBe('completed');
expect(result.runMovementLogs).toContain('plan');
expect(result.runMovementLogs).toContain('architect');
expect(result.runMovementLogs).toContain('Plan content');
expect(result.runMovementLogs).toContain('implement');
expect(result.runMovementLogs).toContain('Code content');
expect(result.runReports).toContain('00-plan.md');
expect(result.runReports).toContain('# Plan\nDetails');
});
it('should handle empty logs and reports', () => {
const ctx: RunSessionContext = {
task: 'Empty task',
piece: 'default',
status: 'aborted',
movementLogs: [],
reports: [],
};
const result = formatRunSessionForPrompt(ctx);
expect(result.runTask).toBe('Empty task');
expect(result.runMovementLogs).toBe('');
expect(result.runReports).toBe('');
});
});
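These assertions come close to a full specification of `formatRunSessionForPrompt`. A hedged sketch that satisfies them — the per-movement and per-report heading style here is an assumption, not the real formatter:

```typescript
// Hedged sketch of formatRunSessionForPrompt; the heading layout is assumed.
interface MovementLog {
  step: string;
  persona: string;
  status: string;
  content: string;
}

interface RunReport {
  filename: string;
  content: string;
}

interface RunSessionContext {
  task: string;
  piece: string;
  status: string;
  movementLogs: MovementLog[];
  reports: RunReport[];
}

function formatRunSessionForPrompt(ctx: RunSessionContext) {
  return {
    runTask: ctx.task,
    runPiece: ctx.piece,
    runStatus: ctx.status,
    // Empty arrays join to '', matching the empty-context test.
    runMovementLogs: ctx.movementLogs
      .map((l) => `### ${l.step} (${l.persona}, ${l.status})\n${l.content}`)
      .join('\n\n'),
    runReports: ctx.reports
      .map((r) => `### ${r.filename}\n${r.content}`)
      .join('\n\n'),
  };
}
```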

View File

@@ -105,8 +105,7 @@ describe('saveTaskFile', () => {
});
describe('saveTaskFromInteractive', () => {
- it('should save task with worktree settings when user confirms', async () => {
- mockConfirm.mockResolvedValueOnce(true);
+ it('should always save task with worktree settings', async () => {
mockPromptInput.mockResolvedValueOnce('');
mockPromptInput.mockResolvedValueOnce('');
mockConfirm.mockResolvedValueOnce(true);
@@ -119,18 +118,22 @@ describe('saveTaskFromInteractive', () => {
expect(task.auto_pr).toBe(true);
});
- it('should save task without worktree settings when declined', async () => {
+ it('should keep worktree enabled even when auto-pr is declined', async () => {
mockPromptInput.mockResolvedValueOnce('');
mockPromptInput.mockResolvedValueOnce('');
mockConfirm.mockResolvedValueOnce(false);
await saveTaskFromInteractive(testDir, 'Task content');
const task = loadTasks(testDir).tasks[0]!;
- expect(task.worktree).toBeUndefined();
+ expect(task.worktree).toBe(true);
expect(task.branch).toBeUndefined();
- expect(task.auto_pr).toBeUndefined();
+ expect(task.auto_pr).toBe(false);
});
it('should display piece info when specified', async () => {
mockPromptInput.mockResolvedValueOnce('');
mockPromptInput.mockResolvedValueOnce('');
mockConfirm.mockResolvedValueOnce(false);
await saveTaskFromInteractive(testDir, 'Task content', 'review');
@@ -139,6 +142,8 @@ describe('saveTaskFromInteractive', () => {
});
it('should record issue number in tasks.yaml when issue option is provided', async () => {
mockPromptInput.mockResolvedValueOnce('');
mockPromptInput.mockResolvedValueOnce('');
mockConfirm.mockResolvedValueOnce(false);
await saveTaskFromInteractive(testDir, 'Fix login bug', 'default', { issue: 42 });
@@ -163,7 +168,6 @@ describe('saveTaskFromInteractive', () => {
mockConfirm.mockResolvedValueOnce(true);
mockPromptInput.mockResolvedValueOnce('');
mockPromptInput.mockResolvedValueOnce('');
mockConfirm.mockResolvedValueOnce(true);
mockConfirm.mockResolvedValueOnce(false);
await saveTaskFromInteractive(testDir, 'Task content', 'default', {
@@ -172,7 +176,7 @@ describe('saveTaskFromInteractive', () => {
});
expect(mockConfirm).toHaveBeenNthCalledWith(1, 'Add this issue to tasks?', true);
- expect(mockConfirm).toHaveBeenNthCalledWith(2, 'Create worktree?', true);
+ expect(mockConfirm).toHaveBeenNthCalledWith(2, 'Auto-create PR?', true);
const task = loadTasks(testDir).tasks[0]!;
expect(task.issue).toBe(42);
expect(task.worktree).toBe(true);

View File

@@ -0,0 +1,60 @@
/**
* Tests for selector shared utilities
*/
import { describe, it, expect } from 'vitest';
import { truncateForLabel, formatDateForSelector } from '../features/interactive/selectorUtils.js';
describe('truncateForLabel', () => {
it('should return text as-is when within max length', () => {
const result = truncateForLabel('Short text', 20);
expect(result).toBe('Short text');
});
it('should truncate text exceeding max length with ellipsis', () => {
const longText = 'A'.repeat(100);
const result = truncateForLabel(longText, 60);
expect(result).toHaveLength(61); // 60 + '…'
expect(result).toBe('A'.repeat(60) + '…');
});
it('should replace newlines with spaces', () => {
const result = truncateForLabel('Line one\nLine two\nLine three', 50);
expect(result).toBe('Line one Line two Line three');
expect(result).not.toContain('\n');
});
it('should trim surrounding whitespace', () => {
const result = truncateForLabel(' padded text ', 50);
expect(result).toBe('padded text');
});
it('should handle text exactly at max length', () => {
const exactText = 'A'.repeat(60);
const result = truncateForLabel(exactText, 60);
expect(result).toBe(exactText);
});
});
describe('formatDateForSelector', () => {
it('should format date for English locale', () => {
const result = formatDateForSelector('2026-02-01T10:30:00Z', 'en');
expect(result).toBeTruthy();
expect(typeof result).toBe('string');
});
it('should format date for Japanese locale', () => {
const result = formatDateForSelector('2026-02-01T10:30:00Z', 'ja');
expect(result).toBeTruthy();
expect(typeof result).toBe('string');
});
});
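The contract these tests describe is small enough to restate in full. One implementation of `truncateForLabel` that satisfies every case above — a sketch; the real selectorUtils.ts may differ in details such as collapsing repeated whitespace:

```typescript
// Sketch of truncateForLabel: flatten newlines, trim, cap at maxLength + '…'.
function truncateForLabel(text: string, maxLength: number): string {
  const normalized = text.replace(/\n/g, ' ').trim();
  if (normalized.length <= maxLength) {
    return normalized;
  }
  return normalized.slice(0, maxLength) + '…';
}
```

The runSelector test above ("should truncate long task labels") implies the selector calls this with a max length of 60, yielding labels of at most 61 characters including the ellipsis.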

View File

@@ -0,0 +1,58 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { isProcessAlive, isStaleRunningTask } from '../infra/task/process.js';
beforeEach(() => {
vi.restoreAllMocks();
});
describe('process alive utility', () => {
it('returns true when process id exists', () => {
const mockKill = vi.spyOn(process, 'kill').mockImplementation(() => true);
const result = isProcessAlive(process.pid);
expect(mockKill).toHaveBeenCalledWith(process.pid, 0);
expect(result).toBe(true);
});
it('returns false when process does not exist', () => {
vi.spyOn(process, 'kill').mockImplementation(() => {
const error = new Error('No such process') as NodeJS.ErrnoException;
error.code = 'ESRCH';
throw error;
});
expect(isProcessAlive(99999)).toBe(false);
});
it('treats permission errors as alive', () => {
vi.spyOn(process, 'kill').mockImplementation(() => {
const error = new Error('Permission denied') as NodeJS.ErrnoException;
error.code = 'EPERM';
throw error;
});
expect(isProcessAlive(99999)).toBe(true);
});
it('throws for unexpected process errors', () => {
vi.spyOn(process, 'kill').mockImplementation(() => {
const error = new Error('Unknown') as NodeJS.ErrnoException;
error.code = 'EINVAL';
throw error;
});
expect(() => isProcessAlive(99999)).toThrow('Unknown');
});
it('returns false when stale check receives a live process id', () => {
vi.spyOn(process, 'kill').mockImplementation(() => true);
expect(isStaleRunningTask(process.pid)).toBe(false);
});
it('returns true when stale check has no process id', () => {
expect(isStaleRunningTask(undefined)).toBe(true);
});
});
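The behaviour under test is the classic signal-0 liveness probe. A sketch consistent with these tests (the real infra/task/process.ts may differ in detail):

```typescript
// process.kill(pid, 0) sends no signal; it only checks existence/permission.
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    const code = (err as { code?: string }).code;
    if (code === 'ESRCH') return false; // no such process
    if (code === 'EPERM') return true;  // process exists but belongs to another user
    throw err; // anything else is unexpected and should surface
  }
}

function isStaleRunningTask(pid: number | undefined): boolean {
  // A "running" task record with no pid, or a dead pid, is stale.
  return pid === undefined || !isProcessAlive(pid);
}
```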

View File

@@ -1,51 +1,54 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';
const {
mockAddTask,
- mockCompleteTask,
- mockFailTask,
- mockExecuteTask,
mockExistsSync,
+ mockStartReExecution,
+ mockRequeueTask,
+ mockExecuteAndCompleteTask,
mockRunInstructMode,
mockDispatchConversationAction,
mockSelectPiece,
mockConfirm,
mockGetLabel,
mockResolveLanguage,
mockListRecentRuns,
mockSelectRun,
mockLoadRunSessionContext,
} = vi.hoisted(() => ({
mockAddTask: vi.fn(() => ({
name: 'instruction-task',
content: 'instruction',
filePath: '/project/.takt/tasks.yaml',
createdAt: '2026-02-14T00:00:00.000Z',
status: 'pending',
data: { task: 'instruction' },
})),
- mockCompleteTask: vi.fn(),
- mockFailTask: vi.fn(),
- mockExecuteTask: vi.fn(),
mockExistsSync: vi.fn(() => true),
+ mockStartReExecution: vi.fn(),
+ mockRequeueTask: vi.fn(),
+ mockExecuteAndCompleteTask: vi.fn(),
mockRunInstructMode: vi.fn(),
mockDispatchConversationAction: vi.fn(),
mockSelectPiece: vi.fn(),
mockConfirm: vi.fn(),
mockGetLabel: vi.fn(),
mockResolveLanguage: vi.fn(() => 'en'),
mockListRecentRuns: vi.fn(() => []),
mockSelectRun: vi.fn(() => null),
mockLoadRunSessionContext: vi.fn(),
}));
vi.mock('node:fs', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
existsSync: (...args: unknown[]) => mockExistsSync(...args),
}));
vi.mock('../infra/task/index.js', () => ({
createTempCloneForBranch: vi.fn(() => ({ path: '/tmp/clone', branch: 'takt/sample' })),
removeClone: vi.fn(),
removeCloneMeta: vi.fn(),
detectDefaultBranch: vi.fn(() => 'main'),
autoCommitAndPush: vi.fn(() => ({ success: false, message: 'no changes' })),
TaskRunner: class {
addTask(...args: unknown[]) {
return mockAddTask(...args);
}
startReExecution(...args: unknown[]) {
return mockStartReExecution(...args);
}
completeTask(...args: unknown[]) {
return mockCompleteTask(...args);
}
failTask(...args: unknown[]) {
return mockFailTask(...args);
}
requeueTask(...args: unknown[]) {
return mockRequeueTask(...args);
}
},
}));
vi.mock('../infra/config/index.js', () => ({
loadGlobalConfig: vi.fn(() => ({ interactivePreviewMovements: 3, language: 'en' })),
getPieceDescription: vi.fn(() => ({
name: 'default',
description: 'desc',
@ -54,18 +57,10 @@ vi.mock('../infra/config/index.js', () => ({
})),
}));
vi.mock('../features/tasks/list/instructMode.js', () => ({
runInstructMode: (...args: unknown[]) => mockRunInstructMode(...args),
}));
vi.mock('../features/tasks/add/index.js', () => ({
saveTaskFile: vi.fn(),
}));
vi.mock('../features/pieceSelection/index.js', () => ({
selectPiece: (...args: unknown[]) => mockSelectPiece(...args),
}));
@ -74,9 +69,27 @@ vi.mock('../features/interactive/actionDispatcher.js', () => ({
dispatchConversationAction: (...args: unknown[]) => mockDispatchConversationAction(...args),
}));
vi.mock('../shared/prompt/index.js', () => ({
confirm: (...args: unknown[]) => mockConfirm(...args),
}));
vi.mock('../shared/i18n/index.js', () => ({
getLabel: (...args: unknown[]) => mockGetLabel(...args),
}));
vi.mock('../features/interactive/index.js', () => ({
resolveLanguage: (...args: unknown[]) => mockResolveLanguage(...args),
listRecentRuns: (...args: unknown[]) => mockListRecentRuns(...args),
selectRun: (...args: unknown[]) => mockSelectRun(...args),
loadRunSessionContext: (...args: unknown[]) => mockLoadRunSessionContext(...args),
}));
vi.mock('../features/tasks/execute/taskExecution.js', () => ({
executeAndCompleteTask: (...args: unknown[]) => mockExecuteAndCompleteTask(...args),
}));
vi.mock('../shared/ui/index.js', () => ({
info: vi.fn(),
success: vi.fn(),
error: vi.fn(),
}));
@ -90,18 +103,32 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
}));
import { instructBranch } from '../features/tasks/list/taskActions.js';
import { error as logError } from '../shared/ui/index.js';
const mockLogError = vi.mocked(logError);
describe('instructBranch direct execution flow', () => {
beforeEach(() => {
vi.clearAllMocks();
mockExistsSync.mockReturnValue(true);
mockSelectPiece.mockResolvedValue('default');
mockRunInstructMode.mockResolvedValue({ action: 'execute', task: '追加指示A' });
mockDispatchConversationAction.mockImplementation(async (_result, handlers) => handlers.execute({ task: '追加指示A' }));
mockConfirm.mockResolvedValue(true);
mockGetLabel.mockReturnValue("Reference a previous run's results?");
mockResolveLanguage.mockReturnValue('en');
mockListRecentRuns.mockReturnValue([]);
mockSelectRun.mockResolvedValue(null);
mockStartReExecution.mockReturnValue({
name: 'done-task',
content: 'done',
data: { task: 'done' },
});
mockExecuteAndCompleteTask.mockResolvedValue(true);
});
it('should execute directly via startReExecution instead of requeuing', async () => {
const result = await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
@ -110,16 +137,101 @@ describe('instructBranch execute flow', () => {
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done', retry_note: '既存ノート' },
});
expect(result).toBe(true);
expect(mockStartReExecution).toHaveBeenCalledWith(
'done-task',
['completed', 'failed'],
undefined,
'既存ノート\n\n追加指示A',
);
expect(mockExecuteAndCompleteTask).toHaveBeenCalled();
});
it('should set generated instruction as retry note when no existing note', async () => {
await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
createdAt: '2026-02-14T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done' },
});
expect(mockStartReExecution).toHaveBeenCalledWith(
'done-task',
['completed', 'failed'],
undefined,
'追加指示A',
);
});
it('should run instruct mode in existing worktree', async () => {
await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
createdAt: '2026-02-14T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done' },
});
expect(mockRunInstructMode).toHaveBeenCalledWith(
'/project/.takt/worktrees/done-task',
expect.any(String),
'takt/done-task',
'done-task',
'done',
'',
expect.anything(),
undefined,
);
});
it('should search runs in worktree for run session context', async () => {
mockListRecentRuns.mockReturnValue([
{ slug: 'run-1', task: 'fix', piece: 'default', status: 'completed', startTime: '2026-02-18T00:00:00Z' },
]);
mockSelectRun.mockResolvedValue('run-1');
const runContext = { task: 'fix', piece: 'default', status: 'completed', movementLogs: [], reports: [] };
mockLoadRunSessionContext.mockReturnValue(runContext);
await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
createdAt: '2026-02-14T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done' },
});
expect(mockConfirm).toHaveBeenCalledWith("Reference a previous run's results?", false);
// selectRunSessionContext uses worktreePath for run data
expect(mockListRecentRuns).toHaveBeenCalledWith('/project/.takt/worktrees/done-task');
expect(mockSelectRun).toHaveBeenCalledWith('/project/.takt/worktrees/done-task', 'en');
expect(mockLoadRunSessionContext).toHaveBeenCalledWith('/project/.takt/worktrees/done-task', 'run-1');
expect(mockRunInstructMode).toHaveBeenCalledWith(
'/project/.takt/worktrees/done-task',
expect.any(String),
'takt/done-task',
'done-task',
'done',
'',
expect.anything(),
runContext,
);
});
it('should return false when worktree does not exist', async () => {
mockExistsSync.mockReturnValue(false);
const result = await instructBranch('/project', {
kind: 'completed',
@ -129,18 +241,18 @@ describe('instructBranch execute flow', () => {
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done' },
});
expect(result).toBe(false);
expect(mockLogError).toHaveBeenCalledWith('Worktree directory does not exist for task: done-task');
expect(mockStartReExecution).not.toHaveBeenCalled();
});
it('should requeue task via requeueTask when save_task action', async () => {
mockDispatchConversationAction.mockImplementation(async (_result, handlers) => handlers.save_task({ task: '追加指示A' }));
const result = await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
createdAt: '2026-02-14T00:00:00.000Z',
@ -148,10 +260,30 @@ describe('instructBranch execute flow', () => {
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done' },
});
expect(result).toBe(true);
expect(mockRequeueTask).toHaveBeenCalledWith('done-task', ['completed', 'failed'], undefined, '追加指示A');
expect(mockStartReExecution).not.toHaveBeenCalled();
expect(mockExecuteAndCompleteTask).not.toHaveBeenCalled();
});
it('should requeue task with existing retry note appended when save_task', async () => {
mockDispatchConversationAction.mockImplementation(async (_result, handlers) => handlers.save_task({ task: '追加指示A' }));
const result = await instructBranch('/project', {
kind: 'completed',
name: 'done-task',
createdAt: '2026-02-14T00:00:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'done',
branch: 'takt/done-task',
worktreePath: '/project/.takt/worktrees/done-task',
data: { task: 'done', retry_note: '既存ノート' },
});
expect(result).toBe(true);
expect(mockRequeueTask).toHaveBeenCalledWith('done-task', ['completed', 'failed'], undefined, '既存ノート\n\n追加指示A');
});
});

View File

@ -1,17 +1,50 @@
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';
import { stringify as stringifyYaml } from 'yaml';
import { describe, it, expect, vi, beforeEach } from 'vitest';
const {
mockExistsSync,
mockSelectPiece,
mockSelectOption,
mockLoadGlobalConfig,
mockLoadPieceByIdentifier,
mockGetPieceDescription,
mockRunRetryMode,
mockFindRunForTask,
mockStartReExecution,
mockRequeueTask,
mockExecuteAndCompleteTask,
} = vi.hoisted(() => ({
mockExistsSync: vi.fn(() => true),
mockSelectPiece: vi.fn(),
mockSelectOption: vi.fn(),
mockLoadGlobalConfig: vi.fn(),
mockLoadPieceByIdentifier: vi.fn(),
mockGetPieceDescription: vi.fn(() => ({
name: 'default',
description: 'desc',
pieceStructure: '',
movementPreviews: [],
})),
mockRunRetryMode: vi.fn(),
mockFindRunForTask: vi.fn(() => null),
mockStartReExecution: vi.fn(),
mockRequeueTask: vi.fn(),
mockExecuteAndCompleteTask: vi.fn(),
}));
vi.mock('node:fs', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
existsSync: (...args: unknown[]) => mockExistsSync(...args),
}));
vi.mock('../features/pieceSelection/index.js', () => ({
selectPiece: (...args: unknown[]) => mockSelectPiece(...args),
}));
vi.mock('../shared/prompt/index.js', () => ({
selectOption: (...args: unknown[]) => mockSelectOption(...args),
}));
vi.mock('../shared/ui/index.js', () => ({
success: vi.fn(),
error: vi.fn(),
info: vi.fn(),
header: vi.fn(),
blankLine: vi.fn(),
@ -27,26 +60,40 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({
}));
vi.mock('../infra/config/index.js', () => ({
loadGlobalConfig: (...args: unknown[]) => mockLoadGlobalConfig(...args),
loadPieceByIdentifier: (...args: unknown[]) => mockLoadPieceByIdentifier(...args),
getPieceDescription: (...args: unknown[]) => mockGetPieceDescription(...args),
}));
vi.mock('../features/interactive/index.js', () => ({
findRunForTask: (...args: unknown[]) => mockFindRunForTask(...args),
loadRunSessionContext: vi.fn(),
getRunPaths: vi.fn(() => ({ logsDir: '/tmp/logs', reportsDir: '/tmp/reports' })),
formatRunSessionForPrompt: vi.fn(() => ({
runTask: '', runPiece: '', runStatus: '', runMovementLogs: '', runReports: '',
})),
runRetryMode: (...args: unknown[]) => mockRunRetryMode(...args),
}));
vi.mock('../infra/task/index.js', () => ({
TaskRunner: class {
startReExecution(...args: unknown[]) {
return mockStartReExecution(...args);
}
requeueTask(...args: unknown[]) {
return mockRequeueTask(...args);
}
},
}));
vi.mock('../features/tasks/execute/taskExecution.js', () => ({
executeAndCompleteTask: (...args: unknown[]) => mockExecuteAndCompleteTask(...args),
}));
import { retryFailedTask } from '../features/tasks/list/taskRetryActions.js';
import type { TaskListItem } from '../infra/task/types.js';
import type { PieceConfig } from '../core/models/index.js';
const defaultPieceConfig: PieceConfig = {
name: 'default',
description: 'Default piece',
@ -59,92 +106,142 @@ const defaultPieceConfig: PieceConfig = {
],
};
function makeFailedTask(overrides?: Partial<TaskListItem>): TaskListItem {
return {
kind: 'failed',
name: 'my-task',
createdAt: '2025-01-15T12:02:00.000Z',
filePath: '/project/.takt/tasks.yaml',
content: 'Do something',
branch: 'takt/my-task',
worktreePath: '/project/.takt/worktrees/my-task',
data: { task: 'Do something', piece: 'default' },
failure: { movement: 'review', error: 'Boom' },
...overrides,
};
}
beforeEach(() => {
vi.clearAllMocks();
mockExistsSync.mockReturnValue(true);
mockSelectPiece.mockResolvedValue('default');
mockLoadGlobalConfig.mockReturnValue({ defaultPiece: 'default' });
mockLoadPieceByIdentifier.mockReturnValue(defaultPieceConfig);
mockSelectOption.mockResolvedValue('plan');
mockRunRetryMode.mockResolvedValue({ action: 'execute', task: '追加指示A' });
mockStartReExecution.mockReturnValue({
name: 'my-task',
content: 'Do something',
data: { task: 'Do something', piece: 'default' },
});
mockExecuteAndCompleteTask.mockResolvedValue(true);
});
describe('retryFailedTask', () => {
it('should run retry mode in existing worktree and execute directly', async () => {
const task = makeFailedTask();
const result = await retryFailedTask(task, '/project');
expect(result).toBe(true);
expect(mockSelectPiece).toHaveBeenCalledWith('/project');
expect(mockRunRetryMode).toHaveBeenCalledWith(
'/project/.takt/worktrees/my-task',
expect.objectContaining({
failure: expect.objectContaining({ taskName: 'my-task', taskContent: 'Do something' }),
}),
);
expect(mockStartReExecution).toHaveBeenCalledWith('my-task', ['failed'], undefined, '追加指示A');
expect(mockExecuteAndCompleteTask).toHaveBeenCalled();
});
it('should pass non-initial movement as startMovement', async () => {
const task = makeFailedTask();
mockSelectOption.mockResolvedValue('implement');
await retryFailedTask(task, '/project');
expect(mockStartReExecution).toHaveBeenCalledWith('my-task', ['failed'], 'implement', '追加指示A');
});
it('should not pass startMovement when initial movement is selected', async () => {
const task = makeFailedTask();
await retryFailedTask(task, '/project');
expect(mockStartReExecution).toHaveBeenCalledWith('my-task', ['failed'], undefined, '追加指示A');
});
it('should append instruction to existing retry note', async () => {
const task = makeFailedTask({ data: { task: 'Do something', piece: 'default', retry_note: '既存ノート' } });
await retryFailedTask(task, '/project');
expect(mockStartReExecution).toHaveBeenCalledWith(
'my-task', ['failed'], undefined, '既存ノート\n\n追加指示A',
);
});
it('should search runs in worktree, not projectDir', async () => {
const task = makeFailedTask();
await retryFailedTask(task, '/project');
expect(mockFindRunForTask).toHaveBeenCalledWith('/project/.takt/worktrees/my-task', 'Do something');
});
it('should throw when worktree path is not set', async () => {
const task = makeFailedTask({ worktreePath: undefined });
await expect(retryFailedTask(task, '/project')).rejects.toThrow('Worktree path is not set');
});
it('should throw when worktree directory does not exist', async () => {
mockExistsSync.mockReturnValue(false);
const task = makeFailedTask();
await expect(retryFailedTask(task, '/project')).rejects.toThrow('Worktree directory does not exist');
});
it('should return false when piece selection is cancelled', async () => {
const task = makeFailedTask();
mockSelectPiece.mockResolvedValue(null);
const result = await retryFailedTask(task, '/project');
expect(result).toBe(false);
expect(mockLoadPieceByIdentifier).not.toHaveBeenCalled();
});
it('should return false when retry mode is cancelled', async () => {
const task = makeFailedTask();
mockRunRetryMode.mockResolvedValue({ action: 'cancel', task: '' });
const result = await retryFailedTask(task, '/project');
expect(result).toBe(false);
expect(mockStartReExecution).not.toHaveBeenCalled();
});
it('should requeue task via requeueTask when save_task action', async () => {
const task = makeFailedTask();
mockRunRetryMode.mockResolvedValue({ action: 'save_task', task: '追加指示A' });
const result = await retryFailedTask(task, '/project');
expect(result).toBe(true);
expect(mockRequeueTask).toHaveBeenCalledWith('my-task', ['failed'], undefined, '追加指示A');
expect(mockStartReExecution).not.toHaveBeenCalled();
expect(mockExecuteAndCompleteTask).not.toHaveBeenCalled();
});
it('should requeue task with existing retry note appended when save_task', async () => {
const task = makeFailedTask({ data: { task: 'Do something', piece: 'default', retry_note: '既存ノート' } });
mockRunRetryMode.mockResolvedValue({ action: 'save_task', task: '追加指示A' });
await retryFailedTask(task, '/project');
expect(mockRequeueTask).toHaveBeenCalledWith('my-task', ['failed'], undefined, '既存ノート\n\n追加指示A');
});
});

View File

@ -12,7 +12,6 @@ import {
initGlobalDirs,
initProjectDirs,
loadGlobalConfig,
isVerboseMode,
} from '../../infra/config/index.js';
import { setQuietMode } from '../../shared/context.js';
@ -68,13 +67,7 @@ export async function runPreActionHook(): Promise<void> {
initProjectDirs(resolvedCwd);
const verbose = isVerboseMode(resolvedCwd);
initDebugLogger(verbose ? { enabled: true } : undefined, resolvedCwd);
const config = loadGlobalConfig();

View File

@ -5,7 +5,7 @@
* pipeline mode, or interactive mode.
*/
import { info, error as logError, withProgress } from '../../shared/ui/index.js';
import { confirm } from '../../shared/prompt/index.js';
import { getErrorMessage } from '../../shared/utils/index.js';
import { getLabel } from '../../shared/i18n/index.js';
@ -20,13 +20,14 @@ import {
quietMode,
personaMode,
resolveLanguage,
dispatchConversationAction,
type InteractiveModeResult,
} from '../../features/interactive/index.js';
import { getPieceDescription, loadGlobalConfig } from '../../infra/config/index.js';
import { DEFAULT_PIECE_NAME } from '../../shared/constants.js';
import { program, resolvedCwd, pipelineMode } from './program.js';
import { resolveAgentOverrides, parseCreateWorktreeOption, isDirectTask } from './helpers.js';
import { loadTaskHistory } from './taskHistory.js';
/**
* Resolve issue references from CLI input.
@ -131,7 +132,7 @@ export async function executeDefaultAction(task?: string): Promise<void> {
initialInput = issueResult.initialInput;
}
} catch (e) {
logError(getErrorMessage(e));
process.exit(1);
}
@ -160,6 +161,7 @@ export async function executeDefaultAction(task?: string): Promise<void> {
description: pieceDesc.description,
pieceStructure: pieceDesc.pieceStructure,
movementPreviews: pieceDesc.movementPreviews,
taskHistory: loadTaskHistory(resolvedCwd, lang),
};
let result: InteractiveModeResult;

View File

@ -0,0 +1,55 @@
import { isStaleRunningTask, TaskRunner } from '../../infra/task/index.js';
import {
type TaskHistorySummaryItem,
normalizeTaskHistorySummary,
} from '../../features/interactive/index.js';
import { getErrorMessage } from '../../shared/utils/index.js';
import { error as logError } from '../../shared/ui/index.js';
/**
* Collect completed, failed, and stale running tasks as history items.
*/
function toTaskHistoryItems(cwd: string): TaskHistorySummaryItem[] {
const runner = new TaskRunner(cwd);
const tasks = runner.listAllTaskItems();
const historyItems: TaskHistorySummaryItem[] = [];
for (const task of tasks) {
if (task.kind === 'failed' || task.kind === 'completed') {
historyItems.push({
worktreeId: task.worktreePath ?? task.name,
status: task.kind,
startedAt: task.startedAt ?? '',
completedAt: task.completedAt ?? '',
finalResult: task.kind,
failureSummary: task.failure?.error,
logKey: task.branch ?? task.worktreePath ?? task.name,
});
continue;
}
if (task.kind === 'running' && isStaleRunningTask(task.ownerPid)) {
historyItems.push({
worktreeId: task.worktreePath ?? task.name,
status: 'interrupted',
startedAt: task.startedAt ?? '',
completedAt: task.completedAt ?? '',
finalResult: 'interrupted',
failureSummary: undefined,
logKey: task.branch ?? task.worktreePath ?? task.name,
});
}
}
return historyItems;
}
export function loadTaskHistory(cwd: string, lang: 'en' | 'ja'): TaskHistorySummaryItem[] {
try {
return normalizeTaskHistorySummary(toTaskHistoryItems(cwd), lang);
} catch (err) {
logError(getErrorMessage(err));
return [];
}
}

View File

@ -17,12 +17,6 @@ export interface CustomAgentConfig {
model?: string;
}
/** Observability configuration for runtime event logs */
export interface ObservabilityConfig {
/** Enable provider stream event logging (default: false when undefined) */
@ -63,7 +57,6 @@ export interface GlobalConfig {
logLevel: 'debug' | 'info' | 'warn' | 'error';
provider?: 'claude' | 'codex' | 'opencode' | 'mock';
model?: string;
observability?: ObservabilityConfig;
/** Directory for shared clones (worktree_dir in config). If empty, uses ../{clone-name} relative to project */
worktreeDir?: string;

View File

@ -27,7 +27,6 @@ export type {
PieceConfig,
PieceState,
CustomAgentConfig,
ObservabilityConfig,
Language,
PipelineConfig,

View File

@ -374,12 +374,6 @@ export const CustomAgentConfigSchema = z.object({
{ message: 'Agent must have prompt_file, prompt, claude_agent, or claude_skill' }
);
export const ObservabilityConfigSchema = z.object({
provider_events: z.boolean().optional(),
});
@ -415,7 +409,6 @@ export const GlobalConfigSchema = z.object({
log_level: z.enum(['debug', 'info', 'warn', 'error']).optional().default('info'),
provider: z.enum(['claude', 'codex', 'opencode', 'mock']).optional().default('claude'),
model: z.string().optional(),
observability: ObservabilityConfigSchema.optional(),
/** Directory for shared clones (worktree_dir in config). If empty, uses ../{clone-name} relative to project */
worktree_dir: z.string().optional(),

View File

@ -62,7 +62,6 @@ export type {
// Configuration types (global and project)
export type {
CustomAgentConfig,
ObservabilityConfig,
Language,
PipelineConfig,

View File

@ -9,7 +9,9 @@ export {
selectPostSummaryAction,
formatMovementPreviews,
formatSessionStatus,
normalizeTaskHistorySummary,
type PieceContext,
type TaskHistorySummaryItem,
type InteractiveModeResult,
type InteractiveModeAction,
} from './interactive.js';
@ -19,3 +21,7 @@ export { selectRecentSession } from './sessionSelector.js';
export { passthroughMode } from './passthroughMode.js';
export { quietMode } from './quietMode.js';
export { personaMode } from './personaMode.js';
export { selectRun } from './runSelector.js';
export { listRecentRuns, findRunForTask, loadRunSessionContext, formatRunSessionForPrompt, getRunPaths, type RunSessionContext, type RunPaths } from './runSessionReader.js';
export { runRetryMode, buildRetryTemplateVars, type RetryContext, type RetryFailureInfo, type RetryRunInfo } from './retryMode.js';
export { dispatchConversationAction, type ConversationActionResult } from './actionDispatcher.js';

View File

@ -0,0 +1,263 @@
/**
* Interactive summary helpers.
*/
import { loadTemplate } from '../../shared/prompts/index.js';
import { type MovementPreview } from '../../infra/config/index.js';
import { selectOption } from '../../shared/prompt/index.js';
import { blankLine, info } from '../../shared/ui/index.js';
type TaskHistoryLocale = 'en' | 'ja';
export interface ConversationMessage {
role: 'user' | 'assistant';
content: string;
}
export interface TaskHistorySummaryItem {
worktreeId: string;
status: 'completed' | 'failed' | 'interrupted';
startedAt: string;
completedAt: string;
finalResult: string;
failureSummary: string | undefined;
logKey: string;
}
export function formatMovementPreviews(previews: MovementPreview[], lang: TaskHistoryLocale): string {
return previews.map((p, i) => {
const toolsStr = p.allowedTools.length > 0
? p.allowedTools.join(', ')
: (lang === 'ja' ? 'なし' : 'None');
const editStr = p.canEdit
? (lang === 'ja' ? '可' : 'Yes')
: (lang === 'ja' ? '不可' : 'No');
const personaLabel = lang === 'ja' ? 'ペルソナ' : 'Persona';
const instructionLabel = lang === 'ja' ? 'インストラクション' : 'Instruction';
const toolsLabel = lang === 'ja' ? 'ツール' : 'Tools';
const editLabel = lang === 'ja' ? '編集' : 'Edit';
const lines = [
`### ${i + 1}. ${p.name} (${p.personaDisplayName})`,
];
if (p.personaContent) {
lines.push(`**${personaLabel}:**`, p.personaContent);
}
if (p.instructionContent) {
lines.push(`**${instructionLabel}:**`, p.instructionContent);
}
lines.push(`**${toolsLabel}:** ${toolsStr}`, `**${editLabel}:** ${editStr}`);
return lines.join('\n');
}).join('\n\n');
}
function normalizeDateTime(value: string): string {
return value.trim() === '' ? 'N/A' : value;
}
function normalizeTaskStatus(status: TaskHistorySummaryItem['status'], lang: TaskHistoryLocale): string {
return status === 'completed'
? (lang === 'ja' ? '完了' : 'completed')
: status === 'failed'
? (lang === 'ja' ? '失敗' : 'failed')
: (lang === 'ja' ? '中断' : 'interrupted');
}
export function normalizeTaskHistorySummary(
items: TaskHistorySummaryItem[],
lang: TaskHistoryLocale,
): TaskHistorySummaryItem[] {
return items.map((task) => ({
...task,
startedAt: normalizeDateTime(task.startedAt),
completedAt: normalizeDateTime(task.completedAt),
finalResult: normalizeTaskStatus(task.status, lang),
}));
}
function formatTaskHistoryItem(item: TaskHistorySummaryItem, lang: TaskHistoryLocale): string {
const statusLabel = normalizeTaskStatus(item.status, lang);
const failureSummaryLine = item.failureSummary
? `${lang === 'ja' ? ' - 失敗要約' : ' - Failure summary'}: ${item.failureSummary}\n`
: '';
const lines = [
`- ${lang === 'ja' ? '実行ID' : 'Worktree ID'}: ${item.worktreeId}`,
` - ${lang === 'ja' ? 'ステータス' : 'Status'}: ${statusLabel}`,
` - ${lang === 'ja' ? '開始/終了' : 'Start/End'}: ${item.startedAt} / ${item.completedAt}`,
` - ${lang === 'ja' ? '最終結果' : 'Final result'}: ${item.finalResult}`,
` - ${lang === 'ja' ? 'ログ参照' : 'Log key'}: ${item.logKey}`,
failureSummaryLine,
];
return lines.join('\n').replace(/\n+$/, '');
}
export function formatTaskHistorySummary(taskHistory: TaskHistorySummaryItem[], lang: TaskHistoryLocale): string {
if (taskHistory.length === 0) {
return '';
}
const normalizedTaskHistory = normalizeTaskHistorySummary(taskHistory, lang);
const heading = lang === 'ja'
? '## 実行履歴'
: '## Task execution history';
const details = normalizedTaskHistory.map((item) => formatTaskHistoryItem(item, lang)).join('\n\n');
return `${heading}\n${details}`;
}
function buildTaskFromHistory(history: ConversationMessage[]): string {
return history
.map((msg) => `${msg.role === 'user' ? 'User' : 'Assistant'}: ${msg.content}`)
.join('\n\n');
}
export interface PieceContext {
/** Piece name (e.g. "minimal") */
name: string;
/** Piece description */
description: string;
/** Piece structure (numbered list of movements) */
pieceStructure: string;
/** Movement previews (persona + instruction content for first N movements) */
movementPreviews?: MovementPreview[];
/** Recent task history for conversation context */
taskHistory?: TaskHistorySummaryItem[];
}
export function buildSummaryPrompt(
history: ConversationMessage[],
hasSession: boolean,
lang: 'en' | 'ja',
noTranscriptNote: string,
conversationLabel: string,
pieceContext?: PieceContext,
): string {
let conversation = '';
if (history.length > 0) {
const historyText = buildTaskFromHistory(history);
conversation = `${conversationLabel}\n${historyText}`;
} else if (hasSession) {
conversation = `${conversationLabel}\n${noTranscriptNote}`;
} else {
return '';
}
const hasPiece = !!pieceContext;
const hasPreview = !!pieceContext?.movementPreviews?.length;
const summaryMovementDetails = hasPreview
? `\n### ${lang === 'ja' ? '処理するエージェント' : 'Processing Agents'}\n${formatMovementPreviews(pieceContext!.movementPreviews!, lang)}`
: '';
const summaryTaskHistory = pieceContext?.taskHistory?.length
? formatTaskHistorySummary(pieceContext.taskHistory, lang)
: '';
return loadTemplate('score_summary_system_prompt', lang, {
pieceInfo: hasPiece,
pieceName: pieceContext?.name ?? '',
pieceDescription: pieceContext?.description ?? '',
movementDetails: summaryMovementDetails,
taskHistory: summaryTaskHistory,
conversation,
});
}
export type PostSummaryAction = InteractiveModeAction | 'continue';
export type SummaryActionValue = 'execute' | 'create_issue' | 'save_task' | 'continue';
export interface SummaryActionOption {
label: string;
value: SummaryActionValue;
}
export type SummaryActionLabels = {
execute: string;
createIssue?: string;
saveTask: string;
continue: string;
};
export const BASE_SUMMARY_ACTIONS: readonly SummaryActionValue[] = [
'execute',
'save_task',
'continue',
];
export type InteractiveModeAction = 'execute' | 'save_task' | 'create_issue' | 'cancel';
export interface InteractiveSummaryUIText {
actionPrompt: string;
actions: {
execute: string;
createIssue: string;
saveTask: string;
continue: string;
};
}
export function buildSummaryActionOptions(
labels: SummaryActionLabels,
append: readonly SummaryActionValue[] = [],
): SummaryActionOption[] {
const order = [...BASE_SUMMARY_ACTIONS, ...append];
const seen = new Set<SummaryActionValue>();
const options: SummaryActionOption[] = [];
for (const action of order) {
if (seen.has(action)) {
continue;
}
seen.add(action);
if (action === 'execute') {
options.push({ label: labels.execute, value: action });
continue;
}
if (action === 'create_issue') {
if (labels.createIssue) {
options.push({ label: labels.createIssue, value: action });
}
continue;
}
if (action === 'save_task') {
options.push({ label: labels.saveTask, value: action });
continue;
}
options.push({ label: labels.continue, value: action });
}
return options;
}
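Two behaviors of `buildSummaryActionOptions` are easy to miss: duplicates in `append` are dropped via the `seen` set, and `create_issue` only appears when a `createIssue` label is supplied. A condensed standalone version (names like `buildOptions` are illustrative, not from the codebase):

```typescript
type ActionValue = 'execute' | 'create_issue' | 'save_task' | 'continue';
interface ActionOption { label: string; value: ActionValue }

// Condensed sketch: iterate base order plus appended actions, dedupe,
// and only emit an option when a label exists for that action.
function buildOptions(
  labels: Partial<Record<ActionValue, string>>,
  append: readonly ActionValue[] = [],
): ActionOption[] {
  const order: ActionValue[] = ['execute', 'save_task', 'continue', ...append];
  const seen = new Set<ActionValue>();
  const options: ActionOption[] = [];
  for (const action of order) {
    if (seen.has(action)) continue;
    seen.add(action);
    const label = labels[action];
    if (label) options.push({ label, value: action }); // create_issue skipped without a label
  }
  return options;
}
```

Calling it with `['create_issue', 'execute']` appended but no `create_issue` label still yields only the three base options, with `execute` first.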
export function selectSummaryAction(
task: string,
proposedLabel: string,
actionPrompt: string,
options: SummaryActionOption[],
): Promise<PostSummaryAction | null> {
blankLine();
info(proposedLabel);
console.log(task);
return selectOption<PostSummaryAction>(actionPrompt, options);
}
export function selectPostSummaryAction(
task: string,
proposedLabel: string,
ui: InteractiveSummaryUIText,
): Promise<PostSummaryAction | null> {
return selectSummaryAction(
task,
proposedLabel,
ui.actionPrompt,
buildSummaryActionOptions(
{
execute: ui.actions.execute,
createIssue: ui.actions.createIssue,
saveTask: ui.actions.saveTask,
continue: ui.actions.continue,
},
['create_issue'],
),
);
}



@@ -13,17 +13,20 @@
import type { Language } from '../../core/models/index.js';
import {
type SessionState,
type MovementPreview,
} from '../../infra/config/index.js';
import { selectOption } from '../../shared/prompt/index.js';
import { info, blankLine } from '../../shared/ui/index.js';
import { loadTemplate } from '../../shared/prompts/index.js';
import { getLabel, getLabelObject } from '../../shared/i18n/index.js';
import { loadTemplate } from '../../shared/prompts/index.js';
import {
initializeSession,
displayAndClearSessionState,
runConversationLoop,
} from './conversationLoop.js';
import {
type PieceContext,
formatMovementPreviews,
type InteractiveModeAction,
} from './interactive-summary.js';
import { type RunSessionContext, formatRunSessionForPrompt } from './runSessionReader.js';
/** Shape of interactive UI text */
export interface InteractiveUIText {
@@ -57,7 +60,7 @@ export function formatSessionStatus(state: SessionState, lang: 'en' | 'ja'): str
lines.push(
getLabel('interactive.previousTask.error', lang, {
error: state.errorMessage!,
})
}),
);
} else if (state.status === 'user_stopped') {
lines.push(getLabel('interactive.previousTask.userStopped', lang));
@@ -67,7 +70,7 @@ export function formatSessionStatus(state: SessionState, lang: 'en' | 'ja'): str
lines.push(
getLabel('interactive.previousTask.piece', lang, {
pieceName: state.pieceName,
})
}),
);
// Timestamp
@@ -75,7 +78,7 @@ export function formatSessionStatus(state: SessionState, lang: 'en' | 'ja'): str
lines.push(
getLabel('interactive.previousTask.timestamp', lang, {
timestamp,
})
}),
);
return lines.join('\n');
@@ -85,197 +88,19 @@ export function resolveLanguage(lang?: Language): 'en' | 'ja' {
return lang === 'ja' ? 'ja' : 'en';
}
/**
* Format MovementPreview[] into a Markdown string for template injection.
* Each movement is rendered with its persona and instruction content.
*/
export function formatMovementPreviews(previews: MovementPreview[], lang: 'en' | 'ja'): string {
return previews.map((p, i) => {
const toolsStr = p.allowedTools.length > 0
? p.allowedTools.join(', ')
: (lang === 'ja' ? 'なし' : 'None');
const editStr = p.canEdit
? (lang === 'ja' ? '可' : 'Yes')
: (lang === 'ja' ? '不可' : 'No');
const personaLabel = lang === 'ja' ? 'ペルソナ' : 'Persona';
const instructionLabel = lang === 'ja' ? 'インストラクション' : 'Instruction';
const toolsLabel = lang === 'ja' ? 'ツール' : 'Tools';
const editLabel = lang === 'ja' ? '編集' : 'Edit';
const lines = [
`### ${i + 1}. ${p.name} (${p.personaDisplayName})`,
];
if (p.personaContent) {
lines.push(`**${personaLabel}:**`, p.personaContent);
}
if (p.instructionContent) {
lines.push(`**${instructionLabel}:**`, p.instructionContent);
}
lines.push(`**${toolsLabel}:** ${toolsStr}`, `**${editLabel}:** ${editStr}`);
return lines.join('\n');
}).join('\n\n');
}
export interface ConversationMessage {
role: 'user' | 'assistant';
content: string;
}
/**
* Build the final task description from conversation history for executeTask.
*/
function buildTaskFromHistory(history: ConversationMessage[]): string {
return history
.map((msg) => `${msg.role === 'user' ? 'User' : 'Assistant'}: ${msg.content}`)
.join('\n\n');
}
/** Default toolset for interactive mode */
export const DEFAULT_INTERACTIVE_TOOLS = ['Read', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'];
/**
* Build the summary prompt (used as both system prompt and user message).
* Renders the complete score_summary_system_prompt template with conversation data.
* Returns empty string if there is no conversation to summarize.
*/
export function buildSummaryPrompt(
history: ConversationMessage[],
hasSession: boolean,
lang: 'en' | 'ja',
noTranscriptNote: string,
conversationLabel: string,
pieceContext?: PieceContext,
): string {
let conversation = '';
if (history.length > 0) {
const historyText = buildTaskFromHistory(history);
conversation = `${conversationLabel}\n${historyText}`;
} else if (hasSession) {
conversation = `${conversationLabel}\n${noTranscriptNote}`;
} else {
return '';
}
const hasPiece = !!pieceContext;
const hasPreview = !!pieceContext?.movementPreviews?.length;
const summaryMovementDetails = hasPreview
? `\n### ${lang === 'ja' ? '処理するエージェント' : 'Processing Agents'}\n${formatMovementPreviews(pieceContext!.movementPreviews!, lang)}`
: '';
return loadTemplate('score_summary_system_prompt', lang, {
pieceInfo: hasPiece,
pieceName: pieceContext?.name ?? '',
pieceDescription: pieceContext?.description ?? '',
movementDetails: summaryMovementDetails,
conversation,
});
}
export type PostSummaryAction = InteractiveModeAction | 'continue';
export type SummaryActionValue = 'execute' | 'create_issue' | 'save_task' | 'continue';
export interface SummaryActionOption {
label: string;
value: SummaryActionValue;
}
export type SummaryActionLabels = {
execute: string;
createIssue?: string;
saveTask: string;
continue: string;
};
export const BASE_SUMMARY_ACTIONS: readonly SummaryActionValue[] = [
'execute',
'save_task',
'continue',
];
export function buildSummaryActionOptions(
labels: SummaryActionLabels,
append: readonly SummaryActionValue[] = [],
): SummaryActionOption[] {
const order = [...BASE_SUMMARY_ACTIONS, ...append];
const seen = new Set<SummaryActionValue>();
const options: SummaryActionOption[] = [];
for (const action of order) {
if (seen.has(action)) continue;
seen.add(action);
if (action === 'execute') {
options.push({ label: labels.execute, value: action });
continue;
}
if (action === 'create_issue') {
if (labels.createIssue) {
options.push({ label: labels.createIssue, value: action });
}
continue;
}
if (action === 'save_task') {
options.push({ label: labels.saveTask, value: action });
continue;
}
options.push({ label: labels.continue, value: action });
}
return options;
}
export async function selectSummaryAction(
task: string,
proposedLabel: string,
actionPrompt: string,
options: SummaryActionOption[],
): Promise<PostSummaryAction | null> {
blankLine();
info(proposedLabel);
console.log(task);
return selectOption<PostSummaryAction>(actionPrompt, options);
}
export async function selectPostSummaryAction(
task: string,
proposedLabel: string,
ui: InteractiveUIText,
): Promise<PostSummaryAction | null> {
return selectSummaryAction(
task,
proposedLabel,
ui.actionPrompt,
buildSummaryActionOptions(
{
execute: ui.actions.execute,
createIssue: ui.actions.createIssue,
saveTask: ui.actions.saveTask,
continue: ui.actions.continue,
},
['create_issue'],
),
);
}
export type InteractiveModeAction = 'execute' | 'save_task' | 'create_issue' | 'cancel';
export interface InteractiveModeResult {
/** The action selected by the user */
action: InteractiveModeAction;
/** The assembled task text (only meaningful when action is not 'cancel') */
task: string;
}
export interface PieceContext {
/** Piece name (e.g. "minimal") */
name: string;
/** Piece description */
description: string;
/** Piece structure (numbered list of movements) */
pieceStructure: string;
/** Movement previews (persona + instruction content for first N movements) */
movementPreviews?: MovementPreview[];
}
export const DEFAULT_INTERACTIVE_TOOLS = ['Read', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'];
export {
buildSummaryPrompt,
formatMovementPreviews,
type ConversationMessage,
type PieceContext,
type TaskHistorySummaryItem,
} from './interactive-summary.js';
/**
* Run the interactive task input mode.
@@ -291,6 +116,7 @@ export async function interactiveMode(
initialInput?: string,
pieceContext?: PieceContext,
sessionId?: string,
runSessionContext?: RunSessionContext,
): Promise<InteractiveModeResult> {
const baseCtx = initializeSession(cwd, 'interactive');
const ctx = sessionId ? { ...baseCtx, sessionId } : baseCtx;
@@ -298,10 +124,17 @@
displayAndClearSessionState(cwd, ctx.lang);
const hasPreview = !!pieceContext?.movementPreviews?.length;
const hasRunSession = !!runSessionContext;
const runPromptVars = hasRunSession
? formatRunSessionForPrompt(runSessionContext)
: { runTask: '', runPiece: '', runStatus: '', runMovementLogs: '', runReports: '' };
const systemPrompt = loadTemplate('score_interactive_system_prompt', ctx.lang, {
hasPiecePreview: hasPreview,
pieceStructure: pieceContext?.pieceStructure ?? '',
movementDetails: hasPreview ? formatMovementPreviews(pieceContext!.movementPreviews!, ctx.lang) : '',
hasRunSession,
...runPromptVars,
});
const policyContent = loadTemplate('score_interactive_policy', ctx.lang, {});
const ui = getLabelObject<InteractiveUIText>('interactive.ui', ctx.lang);
@@ -327,3 +160,25 @@ export async function interactiveMode(
introMessage: ui.intro,
}, pieceContext, initialInput);
}
export {
type InteractiveModeAction,
type InteractiveSummaryUIText,
type PostSummaryAction,
type SummaryActionLabels,
type SummaryActionOption,
type SummaryActionValue,
selectPostSummaryAction,
buildSummaryActionOptions,
selectSummaryAction,
formatTaskHistorySummary,
normalizeTaskHistorySummary,
BASE_SUMMARY_ACTIONS,
} from './interactive-summary.js';
export interface InteractiveModeResult {
/** The action selected by the user */
action: InteractiveModeAction;
/** The assembled task text (only meaningful when action is not 'cancel') */
task: string;
}


@@ -0,0 +1,167 @@
/**
* Retry mode for failed tasks.
*
* Provides a dedicated conversation loop with failure context,
* run session data, and piece structure injected into the system prompt.
*/
import {
initializeSession,
displayAndClearSessionState,
runConversationLoop,
type SessionContext,
type ConversationStrategy,
type PostSummaryAction,
} from './conversationLoop.js';
import {
buildSummaryActionOptions,
selectSummaryAction,
formatMovementPreviews,
type PieceContext,
} from './interactive-summary.js';
import { resolveLanguage } from './interactive.js';
import { loadTemplate } from '../../shared/prompts/index.js';
import { getLabelObject } from '../../shared/i18n/index.js';
import { loadGlobalConfig } from '../../infra/config/index.js';
import type { InstructModeResult, InstructUIText } from '../tasks/list/instructMode.js';
/** Failure information for a retry task */
export interface RetryFailureInfo {
readonly taskName: string;
readonly taskContent: string;
readonly createdAt: string;
readonly failedMovement: string;
readonly error: string;
readonly lastMessage: string;
readonly retryNote: string;
}
/** Run session reference data for retry prompt */
export interface RetryRunInfo {
readonly logsDir: string;
readonly reportsDir: string;
readonly task: string;
readonly piece: string;
readonly status: string;
readonly movementLogs: string;
readonly reports: string;
}
/** Full retry context assembled by the caller */
export interface RetryContext {
readonly failure: RetryFailureInfo;
readonly branchName: string;
readonly pieceContext: PieceContext;
readonly run: RetryRunInfo | null;
}
const RETRY_TOOLS = ['Read', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'];
/**
* Convert RetryContext into template variable map.
*/
export function buildRetryTemplateVars(ctx: RetryContext, lang: 'en' | 'ja'): Record<string, string | boolean> {
const hasPiecePreview = !!ctx.pieceContext.movementPreviews?.length;
const movementDetails = hasPiecePreview
? formatMovementPreviews(ctx.pieceContext.movementPreviews!, lang)
: '';
const hasRun = ctx.run !== null;
return {
taskName: ctx.failure.taskName,
taskContent: ctx.failure.taskContent,
branchName: ctx.branchName,
createdAt: ctx.failure.createdAt,
failedMovement: ctx.failure.failedMovement,
failureError: ctx.failure.error,
failureLastMessage: ctx.failure.lastMessage,
retryNote: ctx.failure.retryNote,
hasPiecePreview,
pieceStructure: ctx.pieceContext.pieceStructure,
movementDetails,
hasRun,
runLogsDir: hasRun ? ctx.run!.logsDir : '',
runReportsDir: hasRun ? ctx.run!.reportsDir : '',
runTask: hasRun ? ctx.run!.task : '',
runPiece: hasRun ? ctx.run!.piece : '',
runStatus: hasRun ? ctx.run!.status : '',
runMovementLogs: hasRun ? ctx.run!.movementLogs : '',
runReports: hasRun ? ctx.run!.reports : '',
};
}
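The null-gating pattern in `buildRetryTemplateVars` is worth isolating: when the optional run data is absent, every `run*` variable becomes the empty string so the template's conditional block renders nothing. A minimal standalone sketch (the `RunSketch`/`runVars` names are hypothetical):

```typescript
interface RunSketch { task: string; status: string }

// When run is null, hasRun is false and every run* slot is '' — the
// template can branch on hasRun and safely interpolate the rest.
function runVars(run: RunSketch | null): Record<string, string | boolean> {
  const hasRun = run !== null;
  return {
    hasRun,
    runTask: hasRun ? run!.task : '',
    runStatus: hasRun ? run!.status : '',
  };
}
```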
function createSelectRetryAction(ui: InstructUIText): (task: string, lang: 'en' | 'ja') => Promise<PostSummaryAction | null> {
return async (task: string, _lang: 'en' | 'ja'): Promise<PostSummaryAction | null> => {
return selectSummaryAction(
task,
ui.proposed,
ui.actionPrompt,
buildSummaryActionOptions({
execute: ui.actions.execute,
saveTask: ui.actions.saveTask,
continue: ui.actions.continue,
}),
);
};
}
/**
* Run retry mode conversation loop.
*
* Uses a dedicated system prompt with failure context, run session data,
* and piece structure injected for the AI assistant.
*/
export async function runRetryMode(
cwd: string,
retryContext: RetryContext,
): Promise<InstructModeResult> {
const globalConfig = loadGlobalConfig();
const lang = resolveLanguage(globalConfig.language);
if (!globalConfig.provider) {
throw new Error('Provider is not configured.');
}
const baseCtx = initializeSession(cwd, 'retry');
const ctx: SessionContext = { ...baseCtx, lang, personaName: 'retry' };
displayAndClearSessionState(cwd, ctx.lang);
const ui = getLabelObject<InstructUIText>('instruct.ui', ctx.lang);
const templateVars = buildRetryTemplateVars(retryContext, lang);
const systemPrompt = loadTemplate('score_retry_system_prompt', ctx.lang, templateVars);
const introLabel = ctx.lang === 'ja'
? `## リトライ: ${retryContext.failure.taskName}\n\nブランチ: ${retryContext.branchName}\n\n${ui.intro}`
: `## Retry: ${retryContext.failure.taskName}\n\nBranch: ${retryContext.branchName}\n\n${ui.intro}`;
const policyContent = loadTemplate('score_interactive_policy', ctx.lang, {});
function injectPolicy(userMessage: string): string {
const policyIntro = ctx.lang === 'ja'
? '以下のポリシーは行動規範です。必ず遵守してください。'
: 'The following policy defines behavioral guidelines. Please follow them.';
const reminderLabel = ctx.lang === 'ja'
? '上記の Policy セクションで定義されたポリシー規範を遵守してください。'
: 'Please follow the policy guidelines defined in the Policy section above.';
return `## Policy\n${policyIntro}\n\n${policyContent}\n\n---\n\n${userMessage}\n\n---\n**Policy Reminder:** ${reminderLabel}`;
}
const strategy: ConversationStrategy = {
systemPrompt,
allowedTools: RETRY_TOOLS,
transformPrompt: injectPolicy,
introMessage: introLabel,
selectAction: createSelectRetryAction(ui),
};
const result = await runConversationLoop(cwd, ctx, strategy, retryContext.pieceContext, undefined);
if (result.action === 'cancel') {
return { action: 'cancel', task: '' };
}
return { action: result.action as InstructModeResult['action'], task: result.task };
}


@@ -0,0 +1,49 @@
/**
* Run selector for interactive mode
*
* Checks for recent runs and presents a selection UI
* using the same selectOption pattern as sessionSelector.
*/
import { selectOption, type SelectOptionItem } from '../../shared/prompt/index.js';
import { getLabel } from '../../shared/i18n/index.js';
import { info } from '../../shared/ui/index.js';
import { listRecentRuns, type RunSummary } from './runSessionReader.js';
import { truncateForLabel, formatDateForSelector } from './selectorUtils.js';
/** Maximum label length for run task display */
const MAX_TASK_LABEL_LENGTH = 60;
/**
* Prompt user to select a run from recent runs.
*
* @returns Selected run slug, or null if no runs or cancelled
*/
export async function selectRun(
cwd: string,
lang: 'en' | 'ja',
): Promise<string | null> {
const runs = listRecentRuns(cwd);
if (runs.length === 0) {
info(getLabel('interactive.runSelector.noRuns', lang));
return null;
}
const options: SelectOptionItem<string>[] = runs.map((run: RunSummary) => {
const label = truncateForLabel(run.task, MAX_TASK_LABEL_LENGTH);
const dateStr = formatDateForSelector(run.startTime, lang);
const description = `${dateStr} | ${run.piece} | ${run.status}`;
return {
label,
value: run.slug,
description,
};
});
const prompt = getLabel('interactive.runSelector.prompt', lang);
const selected = await selectOption<string>(prompt, options);
return selected;
}


@@ -0,0 +1,245 @@
/**
* Run session reader for interactive mode
*
* Scans .takt/runs/ for recent runs, loads NDJSON logs and reports,
* and formats them for injection into the interactive system prompt.
*/
import { existsSync, readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { loadNdjsonLog } from '../../infra/fs/index.js';
import type { SessionLog } from '../../shared/utils/index.js';
/** Maximum number of runs to return from listing */
const MAX_RUNS = 10;
/** Maximum character length for movement log content */
const MAX_CONTENT_LENGTH = 500;
/** Summary of a run for selection UI */
export interface RunSummary {
readonly slug: string;
readonly task: string;
readonly piece: string;
readonly status: string;
readonly startTime: string;
}
/** A single movement log entry for display */
interface MovementLogEntry {
readonly step: string;
readonly persona: string;
readonly status: string;
readonly content: string;
}
/** A report file entry */
interface ReportEntry {
readonly filename: string;
readonly content: string;
}
/** Full context loaded from a run for prompt injection */
export interface RunSessionContext {
readonly task: string;
readonly piece: string;
readonly status: string;
readonly movementLogs: readonly MovementLogEntry[];
readonly reports: readonly ReportEntry[];
}
/** Absolute paths to a run's logs and reports directories */
export interface RunPaths {
readonly logsDir: string;
readonly reportsDir: string;
}
interface MetaJson {
readonly task: string;
readonly piece: string;
readonly status: string;
readonly startTime: string;
readonly logsDirectory: string;
readonly reportDirectory: string;
readonly runSlug: string;
}
function truncateContent(content: string, maxLength: number): string {
if (content.length <= maxLength) {
return content;
}
return content.slice(0, maxLength) + '…';
}
function parseMetaJson(metaPath: string): MetaJson | null {
if (!existsSync(metaPath)) {
return null;
}
const raw = readFileSync(metaPath, 'utf-8').trim();
if (!raw) {
return null;
}
try {
return JSON.parse(raw) as MetaJson;
} catch {
return null;
}
}
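The defensive parsing above means one corrupt `meta.json` degrades to `null` rather than aborting the whole run listing. A standalone sketch of the same behavior without the filesystem dependency (`parseMetaSketch` is an illustrative name):

```typescript
// Empty or malformed content yields null instead of throwing, the same
// treatment parseMetaJson gives a missing file.
function parseMetaSketch(raw: string): { task: string } | null {
  const trimmed = raw.trim();
  if (!trimmed) return null;
  try {
    return JSON.parse(trimmed) as { task: string };
  } catch {
    return null; // corrupt JSON is treated the same as a missing file
  }
}
```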
function buildMovementLogs(sessionLog: SessionLog): MovementLogEntry[] {
return sessionLog.history.map((entry) => ({
step: entry.step,
persona: entry.persona,
status: entry.status,
content: truncateContent(entry.content, MAX_CONTENT_LENGTH),
}));
}
function loadReports(reportsDir: string): ReportEntry[] {
if (!existsSync(reportsDir)) {
return [];
}
const files = readdirSync(reportsDir).filter((f) => f.endsWith('.md')).sort();
return files.map((filename) => ({
filename,
content: readFileSync(join(reportsDir, filename), 'utf-8'),
}));
}
function findSessionLogFile(logsDir: string): string | null {
if (!existsSync(logsDir)) {
return null;
}
const files = readdirSync(logsDir).filter(
(f) => f.endsWith('.jsonl') && !f.includes('-provider-events'),
);
const first = files[0];
if (!first) {
return null;
}
return join(logsDir, first);
}
/**
* List recent runs sorted by startTime descending.
*/
export function listRecentRuns(cwd: string): RunSummary[] {
const runsDir = join(cwd, '.takt', 'runs');
if (!existsSync(runsDir)) {
return [];
}
const entries = readdirSync(runsDir, { withFileTypes: true });
const summaries: RunSummary[] = [];
for (const entry of entries) {
if (!entry.isDirectory()) continue;
const metaPath = join(runsDir, entry.name, 'meta.json');
const meta = parseMetaJson(metaPath);
if (!meta) continue;
summaries.push({
slug: entry.name,
task: meta.task,
piece: meta.piece,
status: meta.status,
startTime: meta.startTime,
});
}
summaries.sort((a, b) => b.startTime.localeCompare(a.startTime));
return summaries.slice(0, MAX_RUNS);
}
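The descending sort in `listRecentRuns` relies on ISO-8601 timestamps comparing chronologically as plain strings, so `b.startTime.localeCompare(a.startTime)` puts the newest run first. A standalone check of that assumption (`sortRunsDesc` is a hypothetical helper):

```typescript
// ISO-8601 strings sort lexicographically in chronological order, so
// reversing the compare arguments yields newest-first.
function sortRunsDesc(times: string[]): string[] {
  return [...times].sort((a, b) => b.localeCompare(a));
}
```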
/**
* Find the most recent run matching the given task content.
*
* @returns The run slug if found, null otherwise.
*/
export function findRunForTask(cwd: string, taskContent: string): string | null {
const runs = listRecentRuns(cwd);
const match = runs.find((r) => r.task === taskContent);
return match?.slug ?? null;
}
/**
* Get absolute paths to a run's logs and reports directories.
*/
export function getRunPaths(cwd: string, slug: string): RunPaths {
const metaPath = join(cwd, '.takt', 'runs', slug, 'meta.json');
const meta = parseMetaJson(metaPath);
if (!meta) {
throw new Error(`Run not found: ${slug}`);
}
return {
logsDir: join(cwd, meta.logsDirectory),
reportsDir: join(cwd, meta.reportDirectory),
};
}
/**
* Load full run session context for prompt injection.
*/
export function loadRunSessionContext(cwd: string, slug: string): RunSessionContext {
const metaPath = join(cwd, '.takt', 'runs', slug, 'meta.json');
const meta = parseMetaJson(metaPath);
if (!meta) {
throw new Error(`Run not found: ${slug}`);
}
const logsDir = join(cwd, meta.logsDirectory);
const logFile = findSessionLogFile(logsDir);
let movementLogs: MovementLogEntry[] = [];
if (logFile) {
const sessionLog = loadNdjsonLog(logFile);
if (sessionLog) {
movementLogs = buildMovementLogs(sessionLog);
}
}
const reportsDir = join(cwd, meta.reportDirectory);
const reports = loadReports(reportsDir);
return {
task: meta.task,
piece: meta.piece,
status: meta.status,
movementLogs,
reports,
};
}
/**
* Format run session context into a text block for the system prompt.
*/
export function formatRunSessionForPrompt(ctx: RunSessionContext): {
runTask: string;
runPiece: string;
runStatus: string;
runMovementLogs: string;
runReports: string;
} {
const logLines = ctx.movementLogs.map((log) => {
const header = `### ${log.step} (${log.persona}) — ${log.status}`;
return `${header}\n${log.content}`;
});
const reportLines = ctx.reports.map((report) => {
return `### ${report.filename}\n${report.content}`;
});
return {
runTask: ctx.task,
runPiece: ctx.piece,
runStatus: ctx.status,
runMovementLogs: logLines.join('\n\n'),
runReports: reportLines.join('\n\n'),
};
}
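The log-folding step above turns each movement into a Markdown H3 section joined by blank lines. A standalone sketch of that fold (the `LogSketch`/`foldLogs` names are illustrative):

```typescript
interface LogSketch { step: string; persona: string; status: string; content: string }

// Each movement log becomes an H3 header line plus its (already
// truncated) content; sections are separated by blank lines.
function foldLogs(logs: LogSketch[]): string {
  return logs
    .map((log) => `### ${log.step} (${log.persona}) — ${log.status}\n${log.content}`)
    .join('\n\n');
}
```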


@@ -0,0 +1,27 @@
/**
* Shared utilities for selector UI components.
*/
/**
* Truncate text to a single line with a maximum length for display as a label.
*/
export function truncateForLabel(text: string, maxLength: number): string {
const singleLine = text.replace(/\n/g, ' ').trim();
if (singleLine.length <= maxLength) {
return singleLine;
}
return singleLine.slice(0, maxLength) + '…';
}
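`truncateForLabel` collapses newlines before measuring, so a multi-line task description still fits on a single selector row. A standalone copy for verification (`truncateSketch` is an illustrative name):

```typescript
// Flatten to one line first, then cut to maxLength with an ellipsis.
function truncateSketch(text: string, maxLength: number): string {
  const singleLine = text.replace(/\n/g, ' ').trim();
  return singleLine.length <= maxLength ? singleLine : singleLine.slice(0, maxLength) + '…';
}
```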
/**
* Format a date string for display in selector options.
*/
export function formatDateForSelector(dateStr: string, lang: 'en' | 'ja'): string {
const date = new Date(dateStr);
return date.toLocaleString(lang === 'ja' ? 'ja-JP' : 'en-US', {
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
});
}


@@ -9,6 +9,7 @@ import { loadSessionIndex, extractLastAssistantResponse } from '../../infra/clau
import { selectOption, type SelectOptionItem } from '../../shared/prompt/index.js';
import { getLabel } from '../../shared/i18n/index.js';
import { info } from '../../shared/ui/index.js';
import { truncateForLabel, formatDateForSelector } from './selectorUtils.js';
/** Maximum number of sessions to display */
const MAX_DISPLAY_SESSIONS = 10;
@@ -16,30 +17,6 @@ const MAX_DISPLAY_SESSIONS = 10;
/** Maximum length for last response preview */
const MAX_RESPONSE_PREVIEW_LENGTH = 200;
/**
* Format a modified date for display.
*/
function formatModifiedDate(modified: string, lang: 'en' | 'ja'): string {
const date = new Date(modified);
return date.toLocaleString(lang === 'ja' ? 'ja-JP' : 'en-US', {
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
});
}
/**
* Truncate a single-line string for use as a label.
*/
function truncateForLabel(text: string, maxLength: number): string {
const singleLine = text.replace(/\n/g, ' ').trim();
if (singleLine.length <= maxLength) {
return singleLine;
}
return singleLine.slice(0, maxLength) + '…';
}
/**
* Prompt user to select from recent Claude Code sessions.
*
@@ -70,7 +47,7 @@ export async function selectRecentSession(
for (const session of displaySessions) {
const label = truncateForLabel(session.firstPrompt, 60);
const dateStr = formatModifiedDate(session.modified, lang);
const dateStr = formatDateForSelector(session.modified, lang);
const messagesStr = getLabel('interactive.sessionSelector.messages', lang, {
count: String(session.messageCount),
});


@@ -125,11 +125,6 @@ export async function createIssueAndSaveTask(cwd: string, task: string, piece?:
}
async function promptWorktreeSettings(): Promise<WorktreeSettings> {
const useWorktree = await confirm('Create worktree?', true);
if (!useWorktree) {
return {};
}
const customPath = await promptInput('Worktree path (Enter for auto)');
const worktree: boolean | string = customPath || true;


@@ -96,6 +96,14 @@ export async function resolveTaskExecution(
if (data.worktree) {
throwIfAborted(abortSignal);
baseBranch = getCurrentBranch(defaultCwd);
if (task.worktreePath && fs.existsSync(task.worktreePath)) {
// Reuse existing worktree (clone still on disk from previous execution)
execCwd = task.worktreePath;
branch = data.branch;
worktreePath = task.worktreePath;
isWorktree = true;
} else {
const taskSlug = await withProgress(
'Generating branch name...',
(slug) => `Branch name generated: ${slug}`,
@@ -119,6 +127,7 @@ export async function resolveTaskExecution(
worktreePath = result.path;
isWorktree = true;
}
}
if (task.taskDir && reportDirName) {
taskPrompt = stageTaskSpecForExecution(defaultCwd, execCwd, task.taskDir, reportDirName);


@@ -181,7 +181,7 @@ export async function listTasks(
showFullDiff(cwd, task.branch);
break;
case 'instruct':
await instructBranch(cwd, task, options);
await instructBranch(cwd, task);
break;
case 'try':
tryMergeBranch(cwd, task);


@@ -17,8 +17,10 @@ import {
resolveLanguage,
buildSummaryActionOptions,
selectSummaryAction,
formatMovementPreviews,
type PieceContext,
} from '../../interactive/interactive.js';
import { type RunSessionContext, formatRunSessionForPrompt } from '../../interactive/runSessionReader.js';
import { loadTemplate } from '../../../shared/prompts/index.js';
import { getLabelObject } from '../../../shared/i18n/index.js';
import { loadGlobalConfig } from '../../../infra/config/index.js';
@@ -63,11 +65,49 @@ function createSelectInstructAction(ui: InstructUIText): (task: string, lang: 'e
};
}
function buildInstructTemplateVars(
branchContext: string,
branchName: string,
taskName: string,
taskContent: string,
retryNote: string,
lang: 'en' | 'ja',
pieceContext?: PieceContext,
runSessionContext?: RunSessionContext,
): Record<string, string | boolean> {
const hasPiecePreview = !!pieceContext?.movementPreviews?.length;
const movementDetails = hasPiecePreview
? formatMovementPreviews(pieceContext!.movementPreviews!, lang)
: '';
const hasRunSession = !!runSessionContext;
const runPromptVars = hasRunSession
? formatRunSessionForPrompt(runSessionContext)
: { runTask: '', runPiece: '', runStatus: '', runMovementLogs: '', runReports: '' };
return {
taskName,
taskContent,
branchName,
branchContext,
retryNote,
hasPiecePreview,
pieceStructure: pieceContext?.pieceStructure ?? '',
movementDetails,
hasRunSession,
...runPromptVars,
};
}
export async function runInstructMode(
cwd: string,
branchContext: string,
branchName: string,
taskName: string,
taskContent: string,
retryNote: string,
pieceContext?: PieceContext,
runSessionContext?: RunSessionContext,
): Promise<InstructModeResult> {
const globalConfig = loadGlobalConfig();
const lang = resolveLanguage(globalConfig.language);
@@ -83,17 +123,11 @@ export async function runInstructMode(
const ui = getLabelObject<InstructUIText>('instruct.ui', ctx.lang);
const systemPrompt = loadTemplate('score_interactive_system_prompt', ctx.lang, {
hasPiecePreview: false,
pieceStructure: '',
movementDetails: '',
});
const branchIntro = ctx.lang === 'ja'
? `## ブランチ: ${branchName}\n\n${branchContext}`
: `## Branch: ${branchName}\n\n${branchContext}`;
const introMessage = `${branchIntro}\n\n${ui.intro}`;
const templateVars = buildInstructTemplateVars(
branchContext, branchName, taskName, taskContent, retryNote, lang,
pieceContext, runSessionContext,
);
const systemPrompt = loadTemplate('score_instruct_system_prompt', ctx.lang, templateVars);
const policyContent = loadTemplate('score_interactive_policy', ctx.lang, {});
@@ -111,7 +145,7 @@ export async function runInstructMode(
systemPrompt,
allowedTools: INSTRUCT_TOOLS,
transformPrompt: injectPolicy,
introMessage,
introMessage: ui.intro,
selectAction: createSelectInstructAction(ui),
};


@@ -0,0 +1,43 @@
import { confirm } from '../../../shared/prompt/index.js';
import { getLabel } from '../../../shared/i18n/index.js';
import {
selectRun,
loadRunSessionContext,
listRecentRuns,
type RunSessionContext,
} from '../../interactive/index.js';
export function appendRetryNote(existing: string | undefined, additional: string): string {
const trimmedAdditional = additional.trim();
if (trimmedAdditional === '') {
throw new Error('Additional instruction is empty.');
}
if (!existing || existing.trim() === '') {
return trimmedAdditional;
}
return `${existing}\n\n${trimmedAdditional}`;
}
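`appendRetryNote` lets instructions accumulate across retries, separated by blank lines, while rejecting an empty addition up front. A standalone copy for verification (`appendRetryNoteSketch` is an illustrative name; the logic mirrors the function above):

```typescript
// Empty additions are an error; a blank existing note is replaced,
// otherwise the new note is appended after a blank line.
function appendRetryNoteSketch(existing: string | undefined, additional: string): string {
  const trimmed = additional.trim();
  if (trimmed === '') {
    throw new Error('Additional instruction is empty.');
  }
  if (!existing || existing.trim() === '') {
    return trimmed;
  }
  return `${existing}\n\n${trimmed}`;
}
```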
export async function selectRunSessionContext(
projectDir: string,
lang: 'en' | 'ja',
): Promise<RunSessionContext | undefined> {
if (listRecentRuns(projectDir).length === 0) {
return undefined;
}
const shouldReferenceRun = await confirm(
getLabel('interactive.runSelector.confirm', lang),
false,
);
if (!shouldReferenceRun) {
return undefined;
}
const selectedSlug = await selectRun(projectDir, lang);
if (!selectedSlug) {
return undefined;
}
return loadRunSessionContext(projectDir, selectedSlug);
}


@@ -65,7 +65,7 @@ export async function showDiffAndPromptActionForTask(
`Action for ${branch}:`,
[
{ label: 'View diff', value: 'diff', description: 'Show full diff in pager' },
{ label: 'Instruct', value: 'instruct', description: 'Give additional instructions via temp clone' },
{ label: 'Instruct', value: 'instruct', description: 'Craft additional instructions and requeue this task' },
{ label: 'Try merge', value: 'try', description: 'Squash merge (stage changes without commit)' },
{ label: 'Merge & cleanup', value: 'merge', description: 'Merge and delete branch' },
{ label: 'Delete', value: 'delete', description: 'Discard changes, delete branch' },


@ -1,23 +1,27 @@
/**
* Instruction actions for completed/failed tasks.
*
* Uses the existing worktree (clone) for conversation and direct re-execution.
* The worktree is preserved after initial execution, so no clone creation is needed.
*/
import * as fs from 'node:fs';
import { execFileSync } from 'node:child_process';
import {
createTempCloneForBranch,
removeClone,
removeCloneMeta,
TaskRunner,
detectDefaultBranch,
} from '../../../infra/task/index.js';
import { loadGlobalConfig, getPieceDescription } from '../../../infra/config/index.js';
import { info, success, error as logError } from '../../../shared/ui/index.js';
import { info, error as logError } from '../../../shared/ui/index.js';
import { createLogger, getErrorMessage } from '../../../shared/utils/index.js';
import { executeTask } from '../execute/taskExecution.js';
import type { TaskExecutionOptions } from '../execute/types.js';
import { buildBooleanTaskResult, persistTaskError, persistTaskResult } from '../execute/taskResultHandler.js';
import { runInstructMode } from './instructMode.js';
import { saveTaskFile } from '../add/index.js';
import { selectPiece } from '../../pieceSelection/index.js';
import { dispatchConversationAction } from '../../interactive/actionDispatcher.js';
import type { PieceContext } from '../../interactive/interactive.js';
import { type BranchActionTarget, resolveTargetBranch, resolveTargetWorktreePath } from './taskActionTarget.js';
import { detectDefaultBranch, autoCommitAndPush } from '../../../infra/task/index.js';
import { resolveLanguage } from '../../interactive/index.js';
import { type BranchActionTarget, resolveTargetBranch } from './taskActionTarget.js';
import { appendRetryNote, selectRunSessionContext } from './requeueHelpers.js';
import { executeAndCompleteTask } from '../execute/taskExecution.js';
const log = createLogger('list-tasks');
@ -70,10 +74,18 @@ function getBranchContext(projectDir: string, branch: string): string {
export async function instructBranch(
projectDir: string,
target: BranchActionTarget,
options?: TaskExecutionOptions,
): Promise<boolean> {
if (!('kind' in target)) {
throw new Error('Instruct requeue requires a task target.');
}
if (!target.worktreePath || !fs.existsSync(target.worktreePath)) {
logError(`Worktree directory does not exist for task: ${target.name}`);
return false;
}
const worktreePath = target.worktreePath;
const branch = resolveTargetBranch(target);
const worktreePath = resolveTargetWorktreePath(target);
const selectedPiece = await selectPiece(projectDir);
if (!selectedPiece) {
@ -90,96 +102,45 @@ export async function instructBranch(
movementPreviews: pieceDesc.movementPreviews,
};
const lang = resolveLanguage(globalConfig.language);
// Runs data lives in the worktree (written during previous execution)
const runSessionContext = await selectRunSessionContext(worktreePath, lang);
const branchContext = getBranchContext(projectDir, branch);
const result = await runInstructMode(projectDir, branchContext, branch, pieceContext);
const result = await runInstructMode(
worktreePath, branchContext, branch,
target.name, target.content, target.data?.retry_note ?? '',
pieceContext, runSessionContext,
);
const executeWithInstruction = async (instruction: string): Promise<boolean> => {
const retryNote = appendRetryNote(target.data?.retry_note, instruction);
const runner = new TaskRunner(projectDir);
const taskInfo = runner.startReExecution(target.name, ['completed', 'failed'], undefined, retryNote);
log.info('Starting re-execution of instructed task', {
name: target.name,
worktreePath,
branch,
piece: selectedPiece,
});
return executeAndCompleteTask(taskInfo, runner, projectDir, selectedPiece);
};
return dispatchConversationAction(result, {
cancel: () => {
info('Cancelled');
return false;
},
execute: async ({ task }) => executeWithInstruction(task),
save_task: async ({ task }) => {
const created = await saveTaskFile(projectDir, task, {
piece: selectedPiece,
worktree: true,
branch,
autoPr: false,
});
success(`Task saved: ${created.taskName}`);
info(` Branch: ${branch}`);
log.info('Task saved from instruct mode', { branch, piece: selectedPiece });
const retryNote = appendRetryNote(target.data?.retry_note, task);
const runner = new TaskRunner(projectDir);
runner.requeueTask(target.name, ['completed', 'failed'], undefined, retryNote);
info(`Task "${target.name}" has been requeued.`);
return true;
},
execute: async ({ task }) => {
log.info('Instructing branch via temp clone', { branch, piece: selectedPiece });
info(`Running instruction on ${branch}...`);
const clone = createTempCloneForBranch(projectDir, branch);
const fullInstruction = branchContext
? `${branchContext}## 追加指示\n${task}`
: task;
const runner = new TaskRunner(projectDir);
const taskRecord = runner.addTask(fullInstruction, {
piece: selectedPiece,
worktree: true,
branch,
auto_pr: false,
...(worktreePath ? { worktree_path: worktreePath } : {}),
});
const startedAt = new Date().toISOString();
try {
const taskSuccess = await executeTask({
task: fullInstruction,
cwd: clone.path,
pieceIdentifier: selectedPiece,
projectCwd: projectDir,
agentOverrides: options,
});
const completedAt = new Date().toISOString();
const taskResult = buildBooleanTaskResult({
task: taskRecord,
taskSuccess,
successResponse: 'Instruction completed',
failureResponse: 'Instruction failed',
startedAt,
completedAt,
branch,
...(worktreePath ? { worktreePath } : {}),
});
persistTaskResult(runner, taskResult, { emitStatusLog: false });
if (taskSuccess) {
const commitResult = autoCommitAndPush(clone.path, task, projectDir);
if (commitResult.success && commitResult.commitHash) {
success(`Auto-committed & pushed: ${commitResult.commitHash}`);
} else if (!commitResult.success) {
logError(`Auto-commit failed: ${commitResult.message}`);
}
success(`Instruction completed on ${branch}`);
log.info('Instruction completed', { branch });
} else {
logError(`Instruction failed on ${branch}`);
log.error('Instruction failed', { branch });
}
return taskSuccess;
} catch (err) {
const completedAt = new Date().toISOString();
persistTaskError(runner, taskRecord, startedAt, completedAt, err, {
emitStatusLog: false,
responsePrefix: 'Instruction failed: ',
});
logError(`Instruction failed on ${branch}`);
log.error('Instruction crashed', { branch, error: getErrorMessage(err) });
throw err;
} finally {
removeClone(clone.path);
removeCloneMeta(projectDir, branch);
}
},
});
}


@ -1,17 +1,31 @@
/**
* Retry actions for failed tasks.
*
* Provides interactive retry functionality including
* failure info display and movement selection.
* Uses the existing worktree (clone) for conversation and direct re-execution.
* The worktree is preserved after initial execution, so no clone creation is needed.
*/
import * as fs from 'node:fs';
import type { TaskListItem } from '../../../infra/task/index.js';
import { TaskRunner } from '../../../infra/task/index.js';
import { loadPieceByIdentifier, loadGlobalConfig } from '../../../infra/config/index.js';
import { selectOption, promptInput } from '../../../shared/prompt/index.js';
import { success, error as logError, info, header, blankLine, status } from '../../../shared/ui/index.js';
import { createLogger, getErrorMessage } from '../../../shared/utils/index.js';
import { loadPieceByIdentifier, loadGlobalConfig, getPieceDescription } from '../../../infra/config/index.js';
import { selectPiece } from '../../pieceSelection/index.js';
import { selectOption } from '../../../shared/prompt/index.js';
import { info, header, blankLine, status } from '../../../shared/ui/index.js';
import { createLogger } from '../../../shared/utils/index.js';
import type { PieceConfig } from '../../../core/models/index.js';
import {
findRunForTask,
loadRunSessionContext,
getRunPaths,
formatRunSessionForPrompt,
runRetryMode,
type RetryContext,
type RetryFailureInfo,
type RetryRunInfo,
} from '../../interactive/index.js';
import { executeAndCompleteTask } from '../execute/taskExecution.js';
import { appendRetryNote } from './requeueHelpers.js';
const log = createLogger('list-tasks');
@ -53,23 +67,77 @@ async function selectStartMovement(
return await selectOption<string>('Start from movement:', options);
}
function buildRetryFailureInfo(task: TaskListItem): RetryFailureInfo {
return {
taskName: task.name,
taskContent: task.content,
createdAt: task.createdAt,
failedMovement: task.failure?.movement ?? '',
error: task.failure?.error ?? '',
lastMessage: task.failure?.last_message ?? '',
retryNote: task.data?.retry_note ?? '',
};
}
function buildRetryRunInfo(
runsBaseDir: string,
slug: string,
): RetryRunInfo {
const paths = getRunPaths(runsBaseDir, slug);
const sessionContext = loadRunSessionContext(runsBaseDir, slug);
const formatted = formatRunSessionForPrompt(sessionContext);
return {
logsDir: paths.logsDir,
reportsDir: paths.reportsDir,
task: formatted.runTask,
piece: formatted.runPiece,
status: formatted.runStatus,
movementLogs: formatted.runMovementLogs,
reports: formatted.runReports,
};
}
function resolveWorktreePath(task: TaskListItem): string {
if (!task.worktreePath) {
throw new Error(`Worktree path is not set for task: ${task.name}`);
}
if (!fs.existsSync(task.worktreePath)) {
throw new Error(`Worktree directory does not exist: ${task.worktreePath}`);
}
return task.worktreePath;
}
/**
* Retry a failed task.
*
* @returns true if task was requeued, false if cancelled
* Runs the retry conversation in the existing worktree, then directly
* re-executes the task there (auto-commit + push + status update).
*
* @returns true if task was re-executed successfully, false if cancelled or failed
*/
export async function retryFailedTask(
task: TaskListItem,
projectDir: string,
): Promise<boolean> {
if (task.kind !== 'failed') {
throw new Error(`retryFailedTask requires failed task. received: ${task.kind}`);
}
const worktreePath = resolveWorktreePath(task);
displayFailureInfo(task);
const pieceName = task.data?.piece ?? loadGlobalConfig().defaultPiece ?? 'default';
const pieceConfig = loadPieceByIdentifier(pieceName, projectDir);
const selectedPiece = await selectPiece(projectDir);
if (!selectedPiece) {
info('Cancelled');
return false;
}
const globalConfig = loadGlobalConfig();
const pieceConfig = loadPieceByIdentifier(selectedPiece, projectDir);
if (!pieceConfig) {
logError(`Piece "${pieceName}" not found. Cannot determine available movements.`);
return false;
throw new Error(`Piece "${selectedPiece}" not found after selection.`);
}
const selectedMovement = await selectStartMovement(pieceConfig, task.failure?.movement ?? null);
@ -77,39 +145,51 @@ export async function retryFailedTask(
return false;
}
blankLine();
const retryNote = await promptInput('Retry note (optional, press Enter to skip):');
const trimmedNote = retryNote?.trim();
const pieceDesc = getPieceDescription(selectedPiece, projectDir, globalConfig.interactivePreviewMovements);
const pieceContext = {
name: pieceDesc.name,
description: pieceDesc.description,
pieceStructure: pieceDesc.pieceStructure,
movementPreviews: pieceDesc.movementPreviews,
};
// Runs data lives in the worktree (written during previous execution)
const matchedSlug = findRunForTask(worktreePath, task.content);
const runInfo = matchedSlug ? buildRetryRunInfo(worktreePath, matchedSlug) : null;
blankLine();
const branchName = task.branch ?? task.name;
const retryContext: RetryContext = {
failure: buildRetryFailureInfo(task),
branchName,
pieceContext,
run: runInfo,
};
const retryResult = await runRetryMode(worktreePath, retryContext);
if (retryResult.action === 'cancel') {
return false;
}
try {
const runner = new TaskRunner(projectDir);
const startMovement = selectedMovement !== pieceConfig.initialMovement
? selectedMovement
: undefined;
const retryNote = appendRetryNote(task.data?.retry_note, retryResult.task);
const runner = new TaskRunner(projectDir);
runner.requeueFailedTask(task.name, startMovement, trimmedNote || undefined);
success(`Task requeued: ${task.name}`);
if (startMovement) {
info(` Will start from: ${startMovement}`);
if (retryResult.action === 'save_task') {
runner.requeueTask(task.name, ['failed'], startMovement, retryNote);
info(`Task "${task.name}" has been requeued.`);
return true;
}
if (trimmedNote) {
info(` Retry note: ${trimmedNote}`);
}
info(` File: ${task.filePath}`);
log.info('Requeued failed task', {
const taskInfo = runner.startReExecution(task.name, ['failed'], startMovement, retryNote);
log.info('Starting re-execution of failed task', {
name: task.name,
tasksFile: task.filePath,
worktreePath,
startMovement,
retryNote: trimmedNote,
});
return true;
} catch (err) {
const msg = getErrorMessage(err);
logError(`Failed to requeue task: ${msg}`);
log.error('Failed to requeue task', { name: task.name, error: msg });
return false;
}
return executeAndCompleteTask(taskInfo, runner, projectDir, selectedPiece);
}


@ -1,7 +1,7 @@
/**
* Global configuration loader
*
* Manages ~/.takt/config.yaml and project-level debug settings.
* Manages ~/.takt/config.yaml.
* GlobalConfigManager encapsulates the config cache as a singleton.
*/
@ -9,10 +9,10 @@ import { readFileSync, existsSync, writeFileSync, statSync, accessSync, constant
import { isAbsolute } from 'node:path';
import { parse as parseYaml, stringify as stringifyYaml } from 'yaml';
import { GlobalConfigSchema } from '../../../core/models/index.js';
import type { GlobalConfig, DebugConfig, Language } from '../../../core/models/index.js';
import type { GlobalConfig, Language } from '../../../core/models/index.js';
import type { ProviderPermissionProfiles } from '../../../core/models/provider-profiles.js';
import { normalizeProviderOptions } from '../loaders/pieceParser.js';
import { getGlobalConfigPath, getProjectConfigPath } from '../paths.js';
import { getGlobalConfigPath } from '../paths.js';
import { DEFAULT_LANGUAGE } from '../../../shared/constants.js';
import { parseProviderModel } from '../../../shared/utils/providerModel.js';
@ -168,10 +168,6 @@ export class GlobalConfigManager {
logLevel: parsed.log_level,
provider: parsed.provider,
model: parsed.model,
debug: parsed.debug ? {
enabled: parsed.debug.enabled,
logFile: parsed.debug.log_file,
} : undefined,
observability: parsed.observability ? {
providerEvents: parsed.observability.provider_events,
} : undefined,
@ -228,12 +224,6 @@ export class GlobalConfigManager {
if (config.model) {
raw.model = config.model;
}
if (config.debug) {
raw.debug = {
enabled: config.debug.enabled,
log_file: config.debug.logFile,
};
}
if (config.observability && config.observability.providerEvents !== undefined) {
raw.observability = {
provider_events: config.observability.providerEvents,
@ -458,41 +448,3 @@ export function resolveOpencodeApiKey(): string | undefined {
}
}
/** Load project-level debug configuration (from .takt/config.yaml) */
export function loadProjectDebugConfig(projectDir: string): DebugConfig | undefined {
const configPath = getProjectConfigPath(projectDir);
if (!existsSync(configPath)) {
return undefined;
}
try {
const content = readFileSync(configPath, 'utf-8');
const raw = parseYaml(content);
if (raw && typeof raw === 'object' && 'debug' in raw) {
const debug = raw.debug;
if (debug && typeof debug === 'object') {
return {
enabled: Boolean(debug.enabled),
logFile: typeof debug.log_file === 'string' ? debug.log_file : undefined,
};
}
}
} catch {
// Ignore parse errors
}
return undefined;
}
/** Get effective debug config (project overrides global) */
export function getEffectiveDebugConfig(projectDir?: string): DebugConfig | undefined {
const globalConfig = loadGlobalConfig();
let debugConfig = globalConfig.debug;
if (projectDir) {
const projectDebugConfig = loadProjectDebugConfig(projectDir);
if (projectDebugConfig) {
debugConfig = projectDebugConfig;
}
}
return debugConfig;
}


@ -16,8 +16,6 @@ export {
resolveOpenaiApiKey,
resolveCodexCliPath,
resolveOpencodeApiKey,
loadProjectDebugConfig,
getEffectiveDebugConfig,
} from './globalConfig.js';
export {


@ -28,6 +28,4 @@ export {
loadGlobalConfig,
saveGlobalConfig,
invalidateGlobalConfigCache,
loadProjectDebugConfig,
getEffectiveDebugConfig,
} from '../global/globalConfig.js';


@ -58,3 +58,4 @@ export { stageAndCommit, getCurrentBranch } from './git.js';
export { autoCommitAndPush, type AutoCommitResult } from './autoCommit.js';
export { summarizeTaskName } from './summarize.js';
export { TaskWatcher, type TaskWatcherOptions } from './watcher.js';
export { isStaleRunningTask } from './process.js';


@ -121,6 +121,9 @@ function toBaseTaskListItem(projectDir: string, tasksFile: string, task: TaskRec
content: firstLine(resolveTaskContent(projectDir, task)),
branch: task.branch,
worktreePath: task.worktree_path,
startedAt: task.started_at ?? undefined,
completedAt: task.completed_at ?? undefined,
ownerPid: task.owner_pid ?? undefined,
data: toTaskData(projectDir, task),
};
}

src/infra/task/process.ts (new file)

@ -0,0 +1,23 @@
/**
* Shared process-level helpers.
*/
export function isProcessAlive(ownerPid: number): boolean {
try {
process.kill(ownerPid, 0);
return true;
} catch (err) {
const nodeErr = err as NodeJS.ErrnoException;
if (nodeErr.code === 'ESRCH') {
return false;
}
if (nodeErr.code === 'EPERM') {
return true;
}
throw err;
}
}
export function isStaleRunningTask(ownerPid: number | undefined): boolean {
return ownerPid == null || !isProcessAlive(ownerPid);
}
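The signal-0 liveness check above deserves a short demonstration. Sending signal `0` performs error checking only, with no signal delivered: `ESRCH` means the process does not exist, while `EPERM` means it exists but we lack permission to signal it, so it is treated as alive. A self-contained restatement of the two helpers:

```typescript
// Signal 0 delivers nothing; it only checks whether the pid is signalable.
// ESRCH → no such process (dead); EPERM → exists but not ours (alive).
function isProcessAlive(ownerPid: number): boolean {
  try {
    process.kill(ownerPid, 0);
    return true;
  } catch (err) {
    const nodeErr = err as NodeJS.ErrnoException;
    if (nodeErr.code === 'ESRCH') return false;
    if (nodeErr.code === 'EPERM') return true;
    throw err;
  }
}

// A running task with no recorded owner pid, or whose owner is gone,
// is considered stale and eligible for reclaiming.
function isStaleRunningTask(ownerPid: number | undefined): boolean {
  return ownerPid == null || !isProcessAlive(ownerPid);
}

console.log(isProcessAlive(process.pid)); // true — we are our own proof of life
console.log(isStaleRunningTask(undefined)); // true — no owner recorded
```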


@ -1,5 +1,6 @@
import type { TaskFileData } from './schema.js';
import type { TaskInfo, TaskResult, TaskListItem } from './types.js';
import type { TaskStatus } from './schema.js';
import { TaskStore } from './store.js';
import { TaskLifecycleService } from './taskLifecycleService.js';
import { TaskQueryService } from './taskQueryService.js';
@ -73,6 +74,24 @@ export class TaskRunner {
return this.lifecycle.requeueFailedTask(taskRef, startMovement, retryNote);
}
requeueTask(
taskRef: string,
allowedStatuses: readonly TaskStatus[],
startMovement?: string,
retryNote?: string,
): string {
return this.lifecycle.requeueTask(taskRef, allowedStatuses, startMovement, retryNote);
}
startReExecution(
taskRef: string,
allowedStatuses: readonly TaskStatus[],
startMovement?: string,
retryNote?: string,
): TaskInfo {
return this.lifecycle.startReExecution(taskRef, allowedStatuses, startMovement, retryNote);
}
deletePendingTask(name: string): void {
this.deletion.deletePendingTask(name);
}


@ -4,6 +4,8 @@ import type { TaskInfo, TaskResult } from './types.js';
import { toTaskInfo } from './mapper.js';
import { TaskStore } from './store.js';
import { firstLine, nowIso, sanitizeTaskName } from './naming.js';
import { isStaleRunningTask } from './process.js';
import type { TaskStatus } from './schema.js';
export class TaskLifecycleService {
constructor(
@ -151,12 +153,68 @@ export class TaskLifecycleService {
}
requeueFailedTask(taskRef: string, startMovement?: string, retryNote?: string): string {
return this.requeueTask(taskRef, ['failed'], startMovement, retryNote);
}
/**
* Atomically transition a completed/failed task to running for re-execution.
* Avoids the race condition of requeueTask (→ pending) + claimNextTasks (→ running).
*/
startReExecution(
taskRef: string,
allowedStatuses: readonly TaskStatus[],
startMovement?: string,
retryNote?: string,
): TaskInfo {
const taskName = this.normalizeTaskRef(taskRef);
let found: TaskRecord | undefined;
this.store.update((current) => {
const index = current.tasks.findIndex((task) => (
task.name === taskName
&& allowedStatuses.includes(task.status)
));
if (index === -1) {
const expectedStatuses = allowedStatuses.join(', ');
throw new Error(`Task not found for re-execution: ${taskRef} (expected status: ${expectedStatuses})`);
}
const target = current.tasks[index]!;
const updated: TaskRecord = {
...target,
status: 'running',
started_at: nowIso(),
owner_pid: process.pid,
failure: undefined,
start_movement: startMovement,
retry_note: retryNote,
};
found = updated;
const tasks = [...current.tasks];
tasks[index] = updated;
return { tasks };
});
return toTaskInfo(this.projectDir, this.tasksFile, found!);
}
requeueTask(
taskRef: string,
allowedStatuses: readonly TaskStatus[],
startMovement?: string,
retryNote?: string,
): string {
const taskName = this.normalizeTaskRef(taskRef);
this.store.update((current) => {
const index = current.tasks.findIndex((task) => task.name === taskName && task.status === 'failed');
const index = current.tasks.findIndex((task) => (
task.name === taskName
&& allowedStatuses.includes(task.status)
));
if (index === -1) {
throw new Error(`Failed task not found: ${taskRef}`);
const expectedStatuses = allowedStatuses.join(', ');
throw new Error(`Task not found for requeue: ${taskRef} (expected status: ${expectedStatuses})`);
}
const target = current.tasks[index]!;
@ -197,26 +255,7 @@ export class TaskLifecycleService {
}
private isRunningTaskStale(task: TaskRecord): boolean {
if (task.owner_pid == null) {
return true;
}
return !this.isProcessAlive(task.owner_pid);
}
private isProcessAlive(pid: number): boolean {
try {
process.kill(pid, 0);
return true;
} catch (err) {
const nodeErr = err as NodeJS.ErrnoException;
if (nodeErr.code === 'ESRCH') {
return false;
}
if (nodeErr.code === 'EPERM') {
return true;
}
throw err;
}
return isStaleRunningTask(task.owner_pid ?? undefined);
}
private generateTaskName(content: string, existingNames: string[]): string {
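The compare-and-transition pattern that `startReExecution` relies on can be sketched with a hypothetical in-memory store standing in for `TaskStore` (types and names here are simplified for illustration; only the shape of the `update` callback matches the real code):

```typescript
type Status = 'pending' | 'running' | 'completed' | 'failed';
interface Task { name: string; status: Status; owner_pid?: number }

// Hypothetical stand-in for TaskStore: update() applies one mutation to the
// task list as a single step, so the status check and the transition to
// 'running' cannot interleave with another claim.
class MemoryStore {
  constructor(private tasks: Task[]) {}
  update(fn: (tasks: Task[]) => Task[]): void { this.tasks = fn(this.tasks); }
  get(name: string): Task | undefined { return this.tasks.find(t => t.name === name); }
}

function startReExecution(store: MemoryStore, name: string, allowed: readonly Status[]): Task {
  let found: Task | undefined;
  store.update((tasks) => {
    const index = tasks.findIndex(t => t.name === name && allowed.includes(t.status));
    if (index === -1) throw new Error(`Task not found for re-execution: ${name}`);
    // Check and transition happen inside one update: no requeue→claim window.
    const updated: Task = { ...tasks[index]!, status: 'running', owner_pid: process.pid };
    found = updated;
    const next = [...tasks];
    next[index] = updated;
    return next;
  });
  return found!;
}

const store = new MemoryStore([{ name: 'fix-login', status: 'failed' }]);
const task = startReExecution(store, 'fix-login', ['completed', 'failed']);
console.log(task.status); // running
```

The key property is that a second caller racing on the same task sees either the old status (and wins the transition) or `running` (and gets the "not found for re-execution" error), never a half-applied state.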


@ -85,4 +85,7 @@ export interface TaskListItem {
worktreePath?: string;
data?: TaskFileData;
failure?: TaskFailure;
startedAt?: string;
completedAt?: string;
ownerPid?: number;
}


@ -35,6 +35,10 @@ interactive:
quietDescription: "Generate instructions without asking questions"
passthrough: "Passthrough"
passthroughDescription: "Pass your input directly as task text"
runSelector:
confirm: "Reference a previous run's results?"
prompt: "Select a run to reference:"
noRuns: "No previous runs found."
sessionSelector:
confirm: "Choose a previous session?"
prompt: "Resume from a recent session?"


@ -35,6 +35,10 @@ interactive:
quietDescription: "質問なしでベストエフォートの指示書を生成"
passthrough: "パススルー"
passthroughDescription: "入力をそのままタスクとして渡す"
runSelector:
confirm: "前回の実行結果を参照しますか?"
prompt: "参照するrunを選択してください:"
noRuns: "前回のrunが見つかりませんでした。"
sessionSelector:
confirm: "前回セッションを選択しますか?"
prompt: "直近のセッションを引き継ぎますか?"


@ -0,0 +1,87 @@
<!--
template: score_instruct_system_prompt
role: system prompt for instruct assistant mode (completed/failed tasks)
vars: taskName, taskContent, branchName, branchContext, retryNote, hasPiecePreview, pieceStructure, movementDetails, hasRunSession, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/tasks/list/instructMode
-->
# Additional Instruction Assistant
Reviews completed task artifacts and creates additional instructions for re-execution.
## How TAKT Works
1. **Additional Instruction Assistant (your role)**: Review branch changes and execution results, then converse with users to create additional instructions for re-execution
2. **Piece Execution**: Pass the created instructions to the piece, where multiple AI agents execute sequentially
## Role Boundaries
**Do:**
- Explain the current situation based on branch changes (diffs, commit history)
- Answer user questions with awareness of the change context
- Create concrete additional instructions for the work that still needs to be done
**Don't:**
- Fix code (piece's job)
- Execute tasks directly (piece's job)
- Mention slash commands
## Task Information
**Task name:** {{taskName}}
**Original instruction:** {{taskContent}}
**Branch:** {{branchName}}
## Branch Changes
{{branchContext}}
{{#if retryNote}}
## Existing Retry Note
Instructions added from previous attempts.
{{retryNote}}
{{/if}}
{{#if hasPiecePreview}}
## Piece Structure
This task will be processed through the following workflow:
{{pieceStructure}}
### Agent Details
The following agents will process the task sequentially. Understand each agent's capabilities and instructions to improve the quality of your task instructions.
{{movementDetails}}
### Delegation Guidance
- Do not include excessive detail in instructions for things the agents above can investigate and determine on their own
- Clearly include information that agents cannot resolve on their own (user intent, priorities, constraints, etc.)
- Delegate codebase investigation, implementation details, and dependency analysis to the agents
{{/if}}
{{#if hasRunSession}}
## Previous Run Reference
The user has selected a previous run for reference. Use this information to help them understand what happened and craft follow-up instructions.
**Task:** {{runTask}}
**Piece:** {{runPiece}}
**Status:** {{runStatus}}
### Movement Logs
{{runMovementLogs}}
### Reports
{{runReports}}
### Guidance
- Reference specific movement results when discussing issues or improvements
- Help the user identify what went wrong or what needs additional work
- Suggest concrete follow-up instructions based on the run results
{{/if}}


@ -1,7 +1,7 @@
<!--
template: score_interactive_system_prompt
role: system prompt for interactive planning mode
vars: hasPiecePreview, pieceStructure, movementDetails
vars: hasPiecePreview, pieceStructure, movementDetails, hasRunSession, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/interactive
-->
# Interactive Mode Assistant
@ -43,3 +43,27 @@ The following agents will process the task sequentially. Understand each agent's
- Clearly include information that agents cannot resolve on their own (user intent, priorities, constraints, etc.)
- Delegate codebase investigation, implementation details, and dependency analysis to the agents
{{/if}}
{{#if hasRunSession}}
## Previous Run Reference
The user has selected a previous run for reference. Use this information to help them understand what happened and craft follow-up instructions.
**Task:** {{runTask}}
**Piece:** {{runPiece}}
**Status:** {{runStatus}}
### Movement Logs
{{runMovementLogs}}
### Reports
{{runReports}}
### Guidance
- Reference specific movement results when discussing issues or improvements
- Help the user identify what went wrong or what needs additional work
- Suggest concrete follow-up instructions based on the run results
{{/if}}


@ -0,0 +1,97 @@
<!--
template: score_retry_system_prompt
role: system prompt for retry assistant mode
vars: taskName, taskContent, branchName, createdAt, failedMovement, failureError, failureLastMessage, retryNote, hasPiecePreview, pieceStructure, movementDetails, hasRun, runLogsDir, runReportsDir, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/interactive/retryMode
-->
# Retry Assistant
Diagnoses failed tasks and creates additional instructions for re-execution.
## How TAKT Works
1. **Retry Assistant (your role)**: Analyze failure causes and converse with users to create instructions for re-execution
2. **Piece Execution**: Pass the created instructions to the piece, where multiple AI agents execute sequentially
## Role Boundaries
**Do:**
- Analyze failure information and explain possible causes to the user
- Answer user questions with awareness of the failure context
- Create concrete additional instructions that will help the re-execution succeed
**Don't:**
- Fix code (piece's job)
- Execute tasks directly (piece's job)
- Mention slash commands
## Failure Information
**Task name:** {{taskName}}
**Original instruction:** {{taskContent}}
**Branch:** {{branchName}}
**Failed at:** {{createdAt}}
{{#if failedMovement}}
**Failed movement:** {{failedMovement}}
{{/if}}
**Error:** {{failureError}}
{{#if failureLastMessage}}
### Last Message
{{failureLastMessage}}
{{/if}}
{{#if retryNote}}
## Existing Retry Note
Instructions added from previous retry attempts.
{{retryNote}}
{{/if}}
{{#if hasPiecePreview}}
## Piece Structure
This task will be processed through the following workflow:
{{pieceStructure}}
### Agent Details
The following agents will process the task sequentially. Understand each agent's capabilities and instructions to improve the quality of your task instructions.
{{movementDetails}}
### Delegation Guidance
- Do not include excessive detail in instructions for things the agents above can investigate and determine on their own
- Clearly include information that agents cannot resolve on their own (user intent, priorities, constraints, etc.)
- Delegate codebase investigation, implementation details, and dependency analysis to the agents
{{/if}}
{{#if hasRun}}
## Previous Run Data
Logs and reports from the previous execution are available for reference. Use them to identify the failure cause.
**Logs directory:** {{runLogsDir}}
**Reports directory:** {{runReportsDir}}
**Task:** {{runTask}}
**Piece:** {{runPiece}}
**Status:** {{runStatus}}
### Movement Logs
{{runMovementLogs}}
### Reports
{{runReports}}
### Analysis Guidance
- Focus on the movement logs where the error occurred
- Cross-reference the plans and implementation recorded in reports with the actual failure point
- If the user wants more details, files in the directories above can be read using the Read tool
{{/if}}


@ -1,7 +1,7 @@
<!--
template: score_summary_system_prompt
role: system prompt for conversation-to-task summarization
vars: pieceInfo, pieceName, pieceDescription, movementDetails, conversation
vars: pieceInfo, pieceName, pieceDescription, movementDetails, taskHistory, conversation
caller: features/interactive
-->
You are a task summarizer. Convert the conversation into a concrete task instruction for the planning step.
@ -31,3 +31,7 @@ Create the instruction in the format expected by this piece.
{{conversation}}
{{/if}}
{{#if taskHistory}}
{{taskHistory}}
{{/if}}


@ -0,0 +1,87 @@
<!--
template: score_instruct_system_prompt
role: system prompt for instruct assistant mode (completed/failed tasks)
vars: taskName, taskContent, branchName, branchContext, retryNote, hasPiecePreview, pieceStructure, movementDetails, hasRunSession, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/tasks/list/instructMode
-->
# 追加指示アシスタント
完了済みタスクの成果物を確認し、再実行のための追加指示を作成する。
## TAKTの仕組み
1. **追加指示アシスタント(あなたの役割)**: ブランチの変更内容と実行結果を確認し、ユーザーと対話して再実行用の追加指示を作成する
2. **ピース実行**: 作成した指示書をピースに渡し、複数のAIエージェントが順次実行する
## 役割の境界
**やること:**
- ブランチの変更内容(差分・コミット履歴)を踏まえて状況を説明する
- ユーザーの質問に変更コンテキストを踏まえて回答する
- 追加で必要な作業を具体的な指示として作成する
**やらないこと:**
- コードの修正(ピースの仕事)
- タスクの直接実行(ピースの仕事)
- スラッシュコマンドへの言及
## タスク情報
**タスク名:** {{taskName}}
**元の指示:** {{taskContent}}
**ブランチ:** {{branchName}}
## ブランチの変更内容
{{branchContext}}
{{#if retryNote}}
## 既存の再投入メモ
以前の追加指示で設定された内容です。
{{retryNote}}
{{/if}}
{{#if hasPiecePreview}}
## ピース構成
このタスクは以下のワークフローで処理されます:
{{pieceStructure}}
### エージェント詳細
以下のエージェントが順次タスクを処理します。各エージェントの能力と指示内容を理解し、指示書の質を高めてください。
{{movementDetails}}
### 委譲ガイダンス
- 上記エージェントが自ら調査・判断できる内容は、指示書に過度な詳細を含める必要はありません
- エージェントが自力で解決できない情報(ユーザーの意図、優先度、制約条件など)を指示書に明確に含めてください
- コードベースの調査、実装詳細の特定、依存関係の解析はエージェントに委ねてください
{{/if}}
{{#if hasRunSession}}
## 前回実行の参照
ユーザーが前回の実行結果を参照として選択しました。この情報を使って、何が起きたかを理解し、追加指示の作成を支援してください。
**タスク:** {{runTask}}
**ピース:** {{runPiece}}
**ステータス:** {{runStatus}}
### ムーブメントログ
{{runMovementLogs}}
### レポート
{{runReports}}
### ガイダンス
- 問題点や改善点を議論する際は、具体的なムーブメントの結果を参照してください
- 何がうまくいかなかったか、追加作業が必要な箇所をユーザーが特定できるよう支援してください
- 実行結果に基づいて、具体的なフォローアップ指示を提案してください
{{/if}}


@ -1,7 +1,7 @@
<!--
template: score_interactive_system_prompt
role: system prompt for interactive planning mode
vars: hasPiecePreview, pieceStructure, movementDetails
vars: hasPiecePreview, pieceStructure, movementDetails, hasRunSession, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/interactive
-->
# Interactive Mode Assistant
@ -43,3 +43,27 @@ Handle TAKT's interactive mode, conversing with the user about piece execution
- Clearly include information the agents cannot work out on their own (user intent, priorities, constraints, and so on)
- Leave codebase investigation, implementation details, and dependency analysis to the agents
{{/if}}
{{#if hasRunSession}}
## Previous Run Reference
The user selected a previous run as a reference. Use this information to understand what happened and to help create the follow-up instructions.
**Task:** {{runTask}}
**Piece:** {{runPiece}}
**Status:** {{runStatus}}
### Movement Logs
{{runMovementLogs}}
### Reports
{{runReports}}
### Guidance
- When discussing problems or improvements, refer to the results of specific movements
- Help the user pinpoint what went wrong and where additional work is needed
- Propose concrete follow-up instructions based on the run results
{{/if}}


@ -0,0 +1,97 @@
<!--
template: score_retry_system_prompt
role: system prompt for retry assistant mode
vars: taskName, taskContent, branchName, createdAt, failedMovement, failureError, failureLastMessage, retryNote, hasPiecePreview, pieceStructure, movementDetails, hasRun, runLogsDir, runReportsDir, runTask, runPiece, runStatus, runMovementLogs, runReports
caller: features/interactive/retryMode
-->
# Retry Assistant
Diagnose failed tasks and create follow-up instructions for re-execution.
## How TAKT Works
1. **Retry assistant (your role)**: Analyze the cause of the failure, then work with the user to create an instruction document for re-execution
2. **Piece execution**: The instruction document you create is handed to the piece, where multiple AI agents execute it in sequence
## Role Boundaries
**Do:**
- Analyze the failure information and explain the likely causes to the user
- Answer the user's questions with the failure context in mind
- Create concrete follow-up instructions that will let the re-execution succeed
**Do not:**
- Modify code (that is the piece's job)
- Execute the task directly (that is the piece's job)
- Mention slash commands
## Failure Information
**Task name:** {{taskName}}
**Original instructions:** {{taskContent}}
**Branch:** {{branchName}}
**Failed at:** {{createdAt}}
{{#if failedMovement}}
**Failed movement:** {{failedMovement}}
{{/if}}
**Error:** {{failureError}}
{{#if failureLastMessage}}
### Last Message
{{failureLastMessage}}
{{/if}}
{{#if retryNote}}
## Existing Requeue Note
These instructions were added by a previous retry.
{{retryNote}}
{{/if}}
{{#if hasPiecePreview}}
## Piece Structure
This task is processed by the following workflow:
{{pieceStructure}}
### Agent Details
The following agents process the task in sequence. Understand each agent's capabilities and instructions, and use that understanding to improve the quality of the instruction document.
{{movementDetails}}
### Delegation Guidance
- There is no need to pack the instruction document with details the agents above can investigate and decide for themselves
- Clearly include information the agents cannot work out on their own (user intent, priorities, constraints, and so on)
- Leave codebase investigation, implementation details, and dependency analysis to the agents
{{/if}}
{{#if hasRun}}
## Previous Run Data
The previous run's logs and reports are available. Use them to pinpoint the cause of the failure.
**Log directory:** {{runLogsDir}}
**Report directory:** {{runReportsDir}}
**Task:** {{runTask}}
**Piece:** {{runPiece}}
**Status:** {{runStatus}}
### Movement Logs
{{runMovementLogs}}
### Reports
{{runReports}}
### Analysis Guidance
- Focus on the logs of the movement where the error occurred
- Cross-check the plans and implementation recorded in the reports against where the failure actually occurred
- When the user wants more detail, the files in the directories above can be read with the Read tool
{{/if}}


@ -1,7 +1,7 @@
<!--
template: score_summary_system_prompt
role: system prompt for conversation-to-task summarization
vars: pieceInfo, pieceName, pieceDescription, movementDetails, conversation
vars: pieceInfo, pieceName, pieceDescription, movementDetails, taskHistory, conversation
caller: features/interactive
-->
You are handling TAKT's interactive mode. Convert the conversation so far into a concrete task instruction document for piece execution.
@ -38,3 +38,7 @@
{{conversation}}
{{/if}}
{{#if taskHistory}}
{{taskHistory}}
{{/if}}
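The new `taskHistory` variable follows the same optional pattern: the caller sets it only when there is history to inject, and the `{{#if taskHistory}}` block drops out otherwise. A minimal sketch of how a caller might assemble that value — the type and field names here are assumptions for illustration, not TAKT's real API:

```typescript
// Hypothetical shape of a task summary; real TAKT field names may differ.
interface TaskSummary {
  name: string;
  status: "completed" | "failed" | "interrupted";
}

// Build the taskHistory template variable, or undefined when there is
// no history so the {{#if taskHistory}} section is omitted entirely.
function buildTaskHistory(summaries: TaskSummary[]): string | undefined {
  if (summaries.length === 0) return undefined;
  const lines = summaries.map((s) => `- ${s.name}: ${s.status}`);
  return ["## Task History", ...lines].join("\n");
}

console.log(buildTaskHistory([{ name: "add-login", status: "failed" }]));
```

Keeping the variable `undefined` rather than an empty string means the template never renders a dangling "Task History" heading with no entries under it.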