commit 77596d5987

CHANGELOG.md
@@ -4,6 +4,41 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).

## [0.9.0] - 2026-02-08

### Added

- **`takt catalog` command**: List available facets (personas, policies, knowledge, instructions, output-contracts) across layers (builtin/user/project)
- **`compound-eye` builtin piece**: Multi-model review — sends the same instruction to Claude and Codex simultaneously, then synthesizes both responses
- **Parallel task execution**: `takt run` now uses a worker pool for concurrent task execution (controlled by `concurrency` config, default: 1)
- **Rich line editor in interactive mode**: Shift+Enter for multiline input, cursor movement (arrow keys, Home/End), Option+Arrow word movement, Ctrl+A/E/K/U/W editing, bracketed paste mode support
- **Movement preview in interactive mode**: Injects piece movement structure (persona + instruction) into the AI planner for improved task analysis (`interactive_preview_movements` config, default: 3)
- **MCP server configuration**: Per-movement MCP (Model Context Protocol) server settings with stdio/SSE/HTTP transport support
- **Facet-level eject**: `takt eject persona coder` — eject individual facets by type and name for customization
- **3-layer facet resolution**: Personas, policies, and other facets resolved via project → user → builtin lookup (name-based references supported)
- **`pr-commenter` persona**: Specialized persona for posting review findings as GitHub PR comments
- **`notification_sound` config**: Enable/disable notification sounds (default: true)
- **Prompt log viewer**: `tools/prompt-log-viewer.html` for visualizing prompt-response pairs during debugging
- Auto-PR base branch now set to the current branch before branch creation
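The worker-pool execution added above can be sketched as follows. This is an illustrative TypeScript sketch, not TAKT's actual `parallelExecution.ts`; the function name is hypothetical:

```typescript
// Illustrative worker pool: run async tasks with bounded concurrency.
// Results keep the original task order.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  concurrency: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker pulls the next unclaimed task index until the queue is empty.
  // The check-and-increment is synchronous, so there is no race on the event loop.
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  };
  const pool = Array.from(
    { length: Math.max(1, Math.min(concurrency, tasks.length)) },
    () => worker()
  );
  await Promise.all(pool);
  return results;
}
```

With `concurrency: 1` this degenerates to sequential execution, matching the documented default.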

### Changed

- Unified planner and architect-planner: extracted design knowledge into knowledge facets, merged into planner. Removed architect movement from default/coding pieces (plan → implement direct transition)
- Replaced readline with raw-mode line editor in interactive mode (cursor management, inter-line movement, Kitty keyboard protocol)
- Unified interactive mode `save_task` with `takt add` worktree setup flow
- Added `-d` flag to caffeinate to prevent App Nap process freezing during display sleep
- Issue references now routed through interactive mode (previously executed directly, now used as initial input)
- SDK update: `@anthropic-ai/claude-agent-sdk` v0.2.34 → v0.2.37
- Enhanced interactive session scoring prompts with piece structure information
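The raw-mode line editor introduced above hinges on decoding terminal escape sequences into editor actions. A minimal illustrative sketch (the action names are hypothetical; the sequences are standard CSI cursor codes):

```typescript
// Decode a handful of standard terminal input sequences into editor actions.
type EditorAction =
  | { kind: "move"; dir: "left" | "right" | "home" | "end" }
  | { kind: "insert"; text: string };

function decodeInput(seq: string): EditorAction {
  switch (seq) {
    case "\x1b[D": return { kind: "move", dir: "left" };  // arrow left (CSI D)
    case "\x1b[C": return { kind: "move", dir: "right" }; // arrow right (CSI C)
    case "\x1b[H": return { kind: "move", dir: "home" };  // Home (CSI H)
    case "\x1b[F": return { kind: "move", dir: "end" };   // End (CSI F)
    default:       return { kind: "insert", text: seq };  // plain text input
  }
}
```

A real editor also has to buffer partial sequences and handle bracketed paste and the Kitty keyboard protocol, which this sketch omits.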

### Internal

- Extracted `resource-resolver.ts` for facet resolution logic (separated from `pieceParser.ts`)
- Extracted `parallelExecution.ts` (worker pool), `resolveTask.ts` (task resolution), `sigintHandler.ts` (shared SIGINT handler)
- Unified session key generation via `session-key.ts`
- New `lineEditor.ts` (raw-mode terminal input, escape sequence parsing, cursor management)
- Extensive test additions: catalog, facet-resolution, eject-facet, lineEditor, formatMovementPreviews, models, debug, strip-ansi, workerPool, runAllTasks-concurrency, session-key, interactive (major expansion), cli-routing-issue-resolve, parallel-logger, engine-parallel-failure, StreamDisplay, getCurrentBranch, globalConfig-defaults, pieceExecution-debug-prompts, selectAndExecute-autoPr, it-notification-sound, it-piece-loader, permission-mode (expansion)
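The facet resolution logic extracted into `resource-resolver.ts` can be sketched as a first-match lookup over the project → user → builtin layers. Illustrative only; the real module's API differs:

```typescript
// Resolve a facet name through the project → user → builtin layers,
// returning the first layer that defines it.
type Layer = "project" | "user" | "builtin";
const LAYER_ORDER: readonly Layer[] = ["project", "user", "builtin"];

function resolveFacet(
  name: string,
  layers: Record<Layer, Record<string, string>>
): { layer: Layer; path: string } | undefined {
  for (const layer of LAYER_ORDER) {
    const path = layers[layer][name];
    if (path !== undefined) return { layer, path };
  }
  return undefined; // unknown facet name
}
```

This is what lets `takt eject persona coder` shadow a builtin: an ejected copy in the project layer wins the lookup.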

## [0.8.0] - 2026-02-08

Official release of the alpha.1 contents. No functional changes.
README.md
@@ -262,6 +262,14 @@ takt clear
# Deploy builtin pieces/personas as Claude Code Skill
takt export-cc

# List available facets across layers
takt catalog
takt catalog personas

# Eject a specific facet for customization
takt eject persona coder
takt eject instruction plan --global

# Preview assembled prompts for each movement and phase
takt prompt [piece]
@@ -432,15 +440,16 @@ TAKT includes multiple builtin pieces:

| Piece | Description |
|----------|-------------|
| `default` | Full development piece: plan → architecture design → implement → AI review → parallel review (architect + security) → supervisor approval. Includes fix loops at each review stage. |
| `default` | Full development piece: plan → implement → AI review → parallel review (architect + QA) → supervisor approval. Includes fix loops at each review stage. |
| `minimal` | Quick piece: plan → implement → review → supervisor. Minimal steps for fast iteration. |
| `review-fix-minimal` | Review-focused piece: review → fix → supervisor. For iterative improvement based on review feedback. |
| `research` | Research piece: planner → digger → supervisor. Autonomously executes research without asking questions. |
| `expert` | Full-stack development piece: architecture, frontend, security, QA reviews with fix loops. |
| `expert-cqrs` | Full-stack development piece (CQRS+ES specialized): CQRS+ES, frontend, security, QA reviews with fix loops. |
| `magi` | Deliberation system inspired by Evangelion. Three AI personas (MELCHIOR, BALTHASAR, CASPER) analyze and vote. |
| `coding` | Lightweight development piece: architect-planner → implement → parallel review (AI antipattern + architecture) → fix. Fast feedback loop without supervisor. |
| `coding` | Lightweight development piece: planner → implement → parallel review (AI antipattern + architecture) → fix. Fast feedback loop without supervisor. |
| `passthrough` | Thinnest wrapper. Pass task directly to coder as-is. No review. |
| `compound-eye` | Multi-model review: sends the same instruction to Claude and Codex simultaneously, then synthesizes both responses. |
| `review-only` | Read-only code review piece that makes no changes. |

**Hybrid Codex variants** (`*-hybrid-codex`): Each major piece has a Codex variant where the coder agent runs on Codex while reviewers use Claude. Available for: default, minimal, expert, expert-cqrs, passthrough, review-fix-minimal, coding.
@@ -466,6 +475,7 @@ Use `takt switch` to switch pieces.
| **research-planner** | Research task planning and scope definition |
| **research-digger** | Deep investigation and information gathering |
| **research-supervisor** | Research quality validation and completeness assessment |
| **pr-commenter** | Posts review findings as GitHub PR comments |

## Custom Personas
@@ -531,6 +541,9 @@ provider: claude # Default provider: claude or codex
model: sonnet # Default model (optional)
branch_name_strategy: romaji # Branch name generation: 'romaji' (fast) or 'ai' (slow)
prevent_sleep: false # Prevent macOS idle sleep during execution (caffeinate)
notification_sound: true # Enable/disable notification sounds
concurrency: 1 # Parallel task count for takt run (1-10, default: 1 = sequential)
interactive_preview_movements: 3 # Movement previews in interactive mode (0-10, default: 3)

# API Key configuration (optional)
# Can be overridden by environment variables TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY

@@ -746,6 +759,7 @@ Special `next` values: `COMPLETE` (success), `ABORT` (failure)
| `permission_mode` | - | Permission mode: `readonly`, `edit`, `full` (provider-independent) |
| `output_contracts` | - | Output contract definitions for report files |
| `quality_gates` | - | AI directives for movement completion requirements |
| `mcp_servers` | - | MCP (Model Context Protocol) server configuration (stdio/SSE/HTTP) |
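To make the `mcp_servers` field concrete, a hypothetical per-movement entry might look like the following. The field names are illustrative assumptions, not TAKT's documented schema (`@modelcontextprotocol/server-filesystem` is a real MCP server package):

```yaml
# Hypothetical shape of a per-movement MCP server configuration.
# Field names below are assumptions for illustration only.
movements:
  - name: implement
    mcp_servers:
      filesystem:
        transport: stdio
        command: npx
        args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
      docs:
        transport: http
        url: https://example.com/mcp
```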

## API Usage Example
@@ -37,6 +37,9 @@ provider: claude
# {issue_body}
# Closes #{issue}

# Notification sounds (true: enabled, false: disabled, default: true)
# notification_sound: true

# Debug settings (optional)
# debug:
#   enabled: false
@@ -8,3 +8,9 @@ Review the code for AI-specific issues:
- Plausible but incorrect patterns
- Compatibility with the existing codebase
- Scope creep detection

## Judgment Procedure

1. Review the change diff and detect issues based on the AI-specific criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -1,9 +1,18 @@
Analyze the task and formulate an implementation plan.
Analyze the task and formulate an implementation plan including design decisions.

**Note:** If a Previous Response exists, this is a replan due to rejection.
Revise the plan taking that feedback into account.

**Criteria for small tasks:**
- Only 1-2 file changes
- No design decisions needed
- No technology selection needed

For small tasks, skip the design sections in the report.

**Actions:**
1. Understand the task requirements
2. Identify the impact area
3. Decide on the implementation approach
2. Investigate code to resolve unknowns
3. Identify the impact area
4. Determine file structure and design patterns (if needed)
5. Decide on the implementation approach
@@ -3,3 +3,9 @@ Review the code for AI-specific issues:
- Plausible but incorrect patterns
- Compatibility with the existing codebase
- Scope creep detection

## Judgment Procedure

1. Review the change diff and detect issues based on the AI-specific criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -8,3 +8,9 @@ Do not review AI-specific issues (already covered by the ai_review movement).
- Test coverage
- Dead code
- Call chain verification

## Judgment Procedure

1. Review the change diff and detect issues based on the architecture and design criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -10,3 +10,9 @@ AI-specific issue review is not needed (already covered by the ai_review movement).

**Note**: If this project does not use the CQRS+ES pattern,
review from a general domain design perspective instead.

## Judgment Procedure

1. Review the change diff and detect issues based on the CQRS and Event Sourcing criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -10,3 +10,9 @@ Review the changes from a frontend development perspective.

**Note**: If this project does not include a frontend,
proceed as no issues found.

## Judgment Procedure

1. Review the change diff and detect issues based on the frontend development criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -6,3 +6,9 @@ Review the changes from a quality assurance perspective.
- Error handling
- Logging and monitoring
- Maintainability

## Judgment Procedure

1. Review the change diff and detect issues based on the quality assurance criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -3,3 +3,9 @@ Review the changes from a security perspective. Check for the following vulnerabilities:
- Authentication and authorization flaws
- Data exposure risks
- Cryptographic weaknesses

## Judgment Procedure

1. Review the change diff and detect issues based on the security criteria above
2. For each detected issue, classify as blocking/non-blocking based on Policy's scope determination table and judgment rules
3. If there is even one blocking issue, judge as REJECT
@@ -12,9 +12,22 @@
### Scope
{Impact area}

### Design Decisions (only when design is needed)

#### File Structure
| File | Role |
|------|------|
| `src/example.ts` | Overview |

#### Design Patterns
- {Adopted patterns and where they apply}

### Implementation Approach
{How to proceed}

## Implementation Guidelines (only when design is needed)
- {Guidelines the Coder should follow during implementation}

## Open Questions (if any)
- {Unclear points or items that need confirmation}
```
@@ -38,6 +38,7 @@ Judge from a big-picture perspective to avoid "missing the forest for the trees."
| Contradictions | Are there conflicting findings between experts? |
| Gaps | Are there areas not covered by any expert? |
| Duplicates | Is the same issue raised from different perspectives? |
| Non-blocking validity | Are items classified as "non-blocking" or "existing problems" by reviewers truly issues in files not targeted by the change? |

### 2. Alignment with Original Requirements
@@ -86,7 +87,7 @@ Judge from a big-picture perspective to avoid "missing the forest for the trees."

When all of the following are met:

1. All expert reviews are APPROVE, or only minor findings
1. All expert reviews are APPROVE
2. Original requirements are met
3. No critical risks
4. Overall consistency is maintained

@@ -100,16 +101,6 @@ When any of the following apply:
3. Critical risks exist
4. Significant contradictions in review results

### Conditional APPROVE

May approve conditionally when:

1. Only minor issues that can be addressed as follow-up tasks
2. Recorded as technical debt with planned remediation
3. Urgent release needed for business reasons

**However, the Boy Scout Rule applies.** Never defer fixes that cost seconds to minutes (redundant code removal, unnecessary expression simplification, etc.) via "conditional APPROVE." If the fix is near-zero cost, make the coder fix it now before approving.

## Communication Style

- Fair and objective
@@ -124,3 +115,4 @@ May approve conditionally when:
- **Stop loops**: Suggest design revision for 3+ iterations
- **Don't forget business value**: Value delivery over technical perfection
- **Consider context**: Judge according to project situation
- **Verify non-blocking classifications**: Always verify issues classified as "non-blocking," "existing problems," or "informational" by reviewers. If an issue in a changed file was marked as non-blocking, escalate it to blocking and REJECT
@@ -1,17 +1,18 @@
# Planner Agent

You are a **task analysis expert**. You analyze user requests and create implementation plans.
You are a **task analysis and design planning specialist**. You analyze user requirements, investigate code to resolve unknowns, and create structurally sound implementation plans.

## Role

- Analyze and understand user requests
- Analyze and understand user requirements
- Resolve unknowns by reading code yourself
- Identify impact scope
- Formulate implementation approach
- Determine file structure and design patterns
- Create implementation guidelines for Coder

**Don't:**
- Implement code (Coder's job)
- Make design decisions (Architect's job)
- Review code
**Not your job:**
- Writing code (Coder's job)
- Code review (Reviewer's job)

## Analysis Phases
@@ -25,26 +26,27 @@ Analyze user request and identify:
| Scope | What areas are affected? |
| Deliverables | What should be created? |

### 2. Impact Scope Identification
### 2. Investigating and Resolving Unknowns

Identify the scope of changes:

- Files/modules that need modification
- Dependencies
- Impact on tests

### 3. Fact-Checking (Source of Truth Verification)

Always verify information used in your analysis against the source of truth:
When the task has unknowns or Open Questions, resolve them by reading code instead of guessing.

| Information Type | Source of Truth |
|-----------------|-----------------|
| Code behavior | Actual source code |
| Config values / names | Actual config files / definition files |
| APIs / commands | Actual implementation code |
| Documentation claims | Cross-check with actual codebase |
| Data structures / types | Type definition files / schemas |

**Don't guess.** Always verify names, values, and behaviors against actual code.
**Don't guess.** Verify names, values, and behavior in the code.
**Don't stop at "unknown."** If the code can tell you, investigate and resolve it.

### 3. Impact Scope Identification

Identify the scope of changes:

- Files/modules that need modification
- Dependencies (callers and callees)
- Impact on tests

### 4. Spec & Constraint Verification
@@ -59,19 +61,42 @@ Always verify information used in your analysis against the source of truth:

**Don't plan against the specs.** If specs are unclear, explicitly state so.

### 5. Implementation Approach
### 5. Structural Design

Determine the implementation direction:
Always choose the optimal structure. Do not follow poor existing code structure.

**File Organization:**
- 1 module, 1 responsibility
- File splitting follows de facto standards of the programming language
- Target 200-400 lines per file. If exceeding, include splitting in the plan
- If existing code has structural problems, include refactoring within the task scope

**Module Design:**
- High cohesion, low coupling
- Maintain dependency direction (upper layers → lower layers)
- No circular dependencies
- Separation of concerns (reads vs. writes, business logic vs. IO)

### 6. Implementation Approach

Based on investigation and design, determine the implementation direction:

- What steps to follow
- File organization (list of files to create/modify)
- Points to be careful about
- Items requiring confirmation
- **Spec constraints** (schemas, formats, ignored fields, etc.)
- Spec constraints

## Important
## Design Principles

**Do not include backward compatibility code in plans.** Unless explicitly instructed, fallbacks, re-exports, and migration code are unnecessary.
**Keep analysis simple.** Overly detailed plans are unnecessary. Provide enough direction for Coder to proceed with implementation.
**Backward Compatibility:**
- Do not include backward compatibility code unless explicitly instructed
- Plan to delete things that are unused

**Make unclear points explicit.** Don't proceed with guesses; report unclear points.
**Don't Generate Unnecessary Code:**
- Don't plan "just in case" code, future fields, or unused methods
- Don't plan to leave TODO comments. Either do it now, or don't

**Important:**
**Investigate before planning.** Don't plan without reading existing code.
**Design simply.** No excessive abstractions or future-proofing. Provide enough direction for Coder to implement without hesitation.
**Ask all clarification questions at once.** Do not ask follow-up questions in multiple rounds.
@@ -5,9 +5,11 @@ piece_categories:
      - passthrough
      - coding
      - minimal
  🔍 Review & Fix:
      - compound-eye
  🔍 Review:
    pieces:
      - review-fix-minimal
      - review-only
  🎨 Frontend: {}
  ⚙️ Backend: {}
  🔧 Expert:

@@ -33,6 +35,5 @@ piece_categories:
    pieces:
      - research
      - magi
      - review-only
show_others_category: true
others_category_name: Others
@@ -7,7 +7,7 @@ max_iterations: 20
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  architect-planner: ../personas/architect-planner.md
  planner: ../personas/planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md

@@ -25,7 +25,8 @@ initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: architect-planner
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob
@@ -9,7 +9,7 @@ policies:
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  architect-planner: ../personas/architect-planner.md
  planner: ../personas/planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md

@@ -23,7 +23,8 @@ initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: architect-planner
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob

@@ -43,6 +44,7 @@ movements:
      report:
        - name: 00-plan.md
          format: plan

  - name: implement
    edit: true
    persona: coder

@@ -77,6 +79,7 @@ movements:
      report:
        - Scope: 02-coder-scope.md
        - Decisions: 03-coder-decisions.md

  - name: reviewers
    parallel:
      - name: ai_review

@@ -123,6 +126,7 @@ movements:
        next: COMPLETE
      - condition: any("AI-specific issues found", "needs_fix")
        next: fix

  - name: fix
    edit: true
    persona: coder
builtins/en/pieces/compound-eye.yaml (new file, 110 lines)
@@ -0,0 +1,110 @@
name: compound-eye
description: Multi-model review - send the same instruction to Claude and Codex simultaneously, synthesize both responses
max_iterations: 10
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  coder: ../personas/coder.md
  supervisor: ../personas/supervisor.md
initial_movement: evaluate
movements:
  - name: evaluate
    parallel:
      - name: claude-eye
        edit: false
        persona: coder
        provider: claude
        session: refresh
        knowledge: architecture
        allowed_tools:
          - Read
          - Glob
          - Grep
          - Bash
          - WebSearch
          - WebFetch
        rules:
          - condition: done
          - condition: failed
        output_contracts:
          report:
            - name: 01-claude.md
      - name: codex-eye
        edit: false
        persona: coder
        provider: codex
        session: refresh
        knowledge: architecture
        allowed_tools:
          - Read
          - Glob
          - Grep
          - Bash
          - WebSearch
          - WebFetch
        rules:
          - condition: done
          - condition: failed
        output_contracts:
          report:
            - name: 02-codex.md
    rules:
      - condition: any("done")
        next: synthesize

  - name: synthesize
    edit: false
    persona: supervisor
    allowed_tools:
      - Read
      - Glob
      - Grep
    rules:
      - condition: synthesis complete
        next: COMPLETE
    instruction_template: |
      Two models (Claude / Codex) independently answered the same instruction.
      Synthesize their responses.

      **Tasks:**
      1. Read reports in the Report Directory
         - `01-claude.md` (Claude's response)
         - `02-codex.md` (Codex's response)
         Note: If one report is missing (model failed), synthesize from the available report only
      2. If both reports exist, compare and clarify:
         - Points of agreement
         - Points of disagreement
         - Points mentioned by only one model
      3. Produce a synthesized conclusion

      **Output format:**
      ```markdown
      # Multi-Model Review Synthesis

      ## Conclusion
      {Synthesized conclusion}

      ## Response Status
      | Model | Status |
      |-------|--------|
      | Claude | ✅ / ❌ |
      | Codex | ✅ / ❌ |

      ## Agreements
      - {Points where both models agree}

      ## Disagreements
      | Topic | Claude | Codex |
      |-------|--------|-------|
      | {topic} | {Claude's view} | {Codex's view} |

      ## Unique Findings
      - **Claude only:** {Points only Claude mentioned}
      - **Codex only:** {Points only Codex mentioned}

      ## Overall Assessment
      {Overall assessment considering both responses}
      ```
    output_contracts:
      report:
        - Summary: 03-synthesis.md
@@ -9,7 +9,6 @@ knowledge:
  architecture: ../knowledge/architecture.md
personas:
  planner: ../personas/planner.md
  architect-planner: ../personas/architect-planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md

@@ -17,7 +16,6 @@ personas:
  supervisor: ../personas/supervisor.md
instructions:
  plan: ../instructions/plan.md
  architect: ../instructions/architect.md
  implement: ../instructions/implement.md
  ai-review: ../instructions/ai-review.md
  ai-fix: ../instructions/ai-fix.md

@@ -28,7 +26,6 @@ instructions:
  supervise: ../instructions/supervise.md
report_formats:
  plan: ../output-contracts/plan.md
  architecture-design: ../output-contracts/architecture-design.md
  ai-review: ../output-contracts/ai-review.md
  architecture-review: ../output-contracts/architecture-review.md
  qa-review: ../output-contracts/qa-review.md

@@ -64,6 +61,7 @@ movements:
  - name: plan
    edit: false
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob

@@ -73,7 +71,7 @@ movements:
      - WebFetch
    rules:
      - condition: Requirements are clear and implementable
        next: architect
        next: implement
      - condition: User is asking a question (not an implementation task)
        next: COMPLETE
      - condition: Requirements unclear, insufficient info

@@ -87,27 +85,6 @@ movements:
      report:
        - name: 00-plan.md
          format: plan
  - name: architect
    edit: false
    persona: architect-planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    rules:
      - condition: Small task (no design needed)
        next: implement
      - condition: Design complete
        next: implement
      - condition: Insufficient info, cannot proceed
        next: ABORT
    instruction: architect
    output_contracts:
      report:
        - name: 01-architecture.md
          format: architecture-design
  - name: implement
    edit: true
    persona: coder
@@ -12,7 +12,6 @@ knowledge:
  architecture: ../knowledge/architecture.md
personas:
  planner: ../personas/planner.md
  architect-planner: ../personas/architect-planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md

@@ -20,7 +19,6 @@ personas:
  supervisor: ../personas/supervisor.md
instructions:
  plan: ../instructions/plan.md
  architect: ../instructions/architect.md
  implement: ../instructions/implement.md
  ai-review: ../instructions/ai-review.md
  ai-fix: ../instructions/ai-fix.md

@@ -59,6 +57,7 @@ movements:
  - name: plan
    edit: false
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob

@@ -68,7 +67,7 @@ movements:
      - WebFetch
    rules:
      - condition: Requirements are clear and implementable
        next: architect
        next: implement
      - condition: User is asking a question (not an implementation task)
        next: COMPLETE
      - condition: Requirements unclear, insufficient info

@@ -82,27 +81,7 @@ movements:
      report:
        - name: 00-plan.md
          format: plan
  - name: architect
    edit: false
    persona: architect-planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    rules:
      - condition: Small task (no design needed)
        next: implement
      - condition: Design complete
        next: implement
      - condition: Insufficient info, cannot proceed
        next: ABORT
    instruction: architect
    output_contracts:
      report:
        - name: 01-architecture.md
          format: architecture-design

  - name: implement
    edit: true
    persona: coder

@@ -139,6 +118,7 @@ movements:
      report:
        - Scope: 02-coder-scope.md
        - Decisions: 03-coder-decisions.md

  - name: ai_review
    edit: false
    persona: ai-antipattern-reviewer

@@ -161,6 +141,7 @@ movements:
      report:
        - name: 04-ai-review.md
          format: ai-review

  - name: ai_fix
    edit: true
    persona: coder

@@ -189,6 +170,7 @@ movements:
      - condition: Cannot proceed, insufficient info
        next: ai_no_fix
    instruction: ai-fix

  - name: ai_no_fix
    edit: false
    persona: architecture-reviewer

@@ -203,6 +185,7 @@ movements:
      - condition: ai_fix's judgment is valid (no fix needed)
        next: reviewers
    instruction: arbitrate

  - name: reviewers
    parallel:
      - name: arch-review

@@ -251,6 +234,7 @@ movements:
        next: supervise
      - condition: any("needs_fix")
        next: fix

  - name: fix
    edit: true
    persona: coder

@@ -276,6 +260,7 @@ movements:
      - condition: Cannot proceed, insufficient info
        next: plan
    instruction: fix

  - name: supervise
    edit: false
    persona: supervisor

@@ -299,7 +284,6 @@ movements:
      - Summary: summary.md
report_formats:
  plan: ../output-contracts/plan.md
  architecture-design: ../output-contracts/architecture-design.md
  ai-review: ../output-contracts/ai-review.md
  architecture-review: ../output-contracts/architecture-review.md
  qa-review: ../output-contracts/qa-review.md
@@ -17,6 +17,7 @@ Define the shared judgment criteria and behavioral principles for all reviewers.
| Situation | Verdict | Action |
|-----------|---------|--------|
| Problem introduced by this change | Blocking | REJECT |
| Code made unused by this change (arguments, imports, variables, functions) | Blocking | REJECT (change-induced problem) |
| Existing problem in a changed file | Blocking | REJECT (Boy Scout rule) |
| Structural problem in the changed module | Blocking | REJECT if within scope |
| Problem in an unchanged file | Non-blocking | Record only (informational) |
@@ -107,10 +108,18 @@ Leave it better than you found it.
| Redundant expression (a shorter equivalent exists) | REJECT |
| Unnecessary branch/condition (unreachable or always the same result) | REJECT |
| Fixable in seconds to minutes | REJECT (do not mark as "non-blocking") |
| Code made unused as a result of the change (arguments, imports, etc.) | REJECT — change-induced, not an "existing problem" |
| Fix requires refactoring (large scope) | Record only (technical debt) |

Do not tolerate problems just because existing code does the same. If existing code is bad, improve it rather than match it.

## Judgment Rules

- All issues detected in changed files are blocking (REJECT targets), even if the code existed before the change
- Only issues in files NOT targeted by the change may be classified as "existing problems" or "non-blocking"
- "The code itself existed before" is not a valid reason for non-blocking. As long as it is in a changed file, the Boy Scout rule applies
- If even one issue exists, REJECT. "APPROVE with warnings" or "APPROVE with suggestions" is prohibited

## Detecting Circular Arguments

When the same kind of issue keeps recurring, reconsider the approach itself rather than repeating the same fix instructions.
@@ -37,6 +37,9 @@ provider: claude
# {issue_body}
# Closes #{issue}

# 通知音 (true: 有効 / false: 無効、デフォルト: true)
# notification_sound: true

# デバッグ設定 (オプション)
# debug:
#   enabled: false
@@ -8,3 +8,9 @@ AI特有の問題についてコードをレビューしてください:
- もっともらしいが間違っているパターン
- 既存コードベースとの適合性
- スコープクリープの検出

## 判定手順

1. 変更差分を確認し、AI特有の問題の観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -1,9 +1,18 @@
タスクを分析し、実装方針を立ててください。
タスクを分析し、設計を含めた実装方針を立ててください。

**注意:** Previous Responseがある場合は差し戻しのため、
その内容を踏まえて計画を見直してください(replan)。

**小規模タスクの判断基準:**
- 1-2ファイルの変更のみ
- 設計判断が不要
- 技術選定が不要

小規模タスクの場合は設計セクションを省略してください。

**やること:**
1. タスクの要件を理解する
2. 影響範囲を特定する
3. 実装アプローチを決める
2. コードを調査して不明点を解決する
3. 影響範囲を特定する
4. ファイル構成・設計パターンを決定する(必要な場合)
5. 実装アプローチを決める
@@ -3,3 +3,9 @@ AI特有の問題についてコードをレビューしてください:
- もっともらしいが間違っているパターン
- 既存コードベースとの適合性
- スコープクリープの検出

## 判定手順

1. 変更差分を確認し、AI特有の問題の観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -8,3 +8,9 @@ AI特有の問題はレビューしないでください(ai_reviewムーブメ
- テストカバレッジ
- デッドコード
- 呼び出しチェーン検証

## 判定手順

1. 変更差分を確認し、構造・設計の観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -10,3 +10,9 @@ CQRS(コマンドクエリ責務分離)とEvent Sourcing(イベントソ

**注意**: このプロジェクトがCQRS+ESパターンを使用していない場合は、
一般的なドメイン設計の観点からレビューしてください。

## 判定手順

1. 変更差分を確認し、CQRS・イベントソーシングの観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -10,3 +10,9 @@

**注意**: このプロジェクトがフロントエンドを含まない場合は、
問題なしとして次に進んでください。

## 判定手順

1. 変更差分を確認し、フロントエンド開発の観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -6,3 +6,9 @@
- エラーハンドリング
- ログとモニタリング
- 保守性

## 判定手順

1. 変更差分を確認し、品質保証の観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -3,3 +3,9 @@
- 認証・認可の不備
- データ露出リスク
- 暗号化の弱点

## 判定手順

1. 変更差分を確認し、セキュリティの観点に基づいて問題を検出する
2. 検出した問題ごとに、Policyのスコープ判定表と判定ルールに基づいてブロッキング/非ブロッキングを分類する
3. ブロッキング問題が1件でもあればREJECTと判定する
@@ -12,9 +12,22 @@
### スコープ
{影響範囲}

### 設計判断(設計が必要な場合のみ)

#### ファイル構成
| ファイル | 役割 |
|---------|------|
| `src/example.ts` | 概要 |

#### 設計パターン
- {採用するパターンと適用箇所}

### 実装アプローチ
{どう進めるか}

## 実装ガイドライン(設計が必要な場合のみ)
- {Coderが実装時に従うべき指針}

## 確認事項(あれば)
- {不明点や確認が必要な点}
```
@@ -22,6 +22,7 @@
- 堂々巡りを検出したら、3回以上のループで設計見直しを提案する
- ビジネス価値を忘れない。技術的完璧さより価値の提供を重視する
- 優先度を明確に示す。何から手をつけるべきかを伝える
- レビュアーが「非ブロッキング」「既存問題」「参考情報」に分類した問題を必ず検証する。変更対象ファイル内の問題が非ブロッキングにされていた場合、ブロッキングに格上げしてREJECTとする

## ドメイン知識

@@ -32,6 +33,7 @@
| 矛盾 | 専門家間で矛盾する指摘がないか |
| 漏れ | どの専門家もカバーしていない領域がないか |
| 重複 | 同じ問題が異なる観点から指摘されていないか |
| 非ブロッキング判定の妥当性 | 各レビュアーが「非ブロッキング」「既存問題」に分類した項目が、本当に変更対象外ファイルの問題か |

### 元の要求との整合

@@ -52,7 +54,7 @@
### 判定基準

**APPROVEの条件(すべて満たす):**
- すべての専門家レビューがAPPROVE、または軽微な指摘のみ
- すべての専門家レビューがAPPROVEである
- 元の要求を満たしている
- 重大なリスクがない
- 全体として整合性が取れている
@@ -63,10 +65,6 @@
- 重大なリスクがある
- レビュー結果に重大な矛盾がある

**条件付きAPPROVE:**
- 軽微な問題のみで、後続タスクとして対応可能な場合
- ただし、修正コストが数秒〜数分の指摘は先送りにせず、今回のタスクで修正させる(ボーイスカウトルール)

### 堂々巡りの検出

| 状況 | 対応 |
@@ -1,24 +1,25 @@
# Planner

あなたはタスク分析の専門家です。ユーザー要求を分析し、実装方針を立てます。
あなたはタスク分析と設計計画の専門家です。ユーザー要求を分析し、コードを調査して不明点を解決し、構造を意識した実装方針を立てます。

## 役割の境界

**やること:**
- ユーザー要求の分析・理解
- コードを読んで不明点を自力で解決する
- 影響範囲の特定
- 実装アプローチの策定
- ファイル構成・設計パターンの決定
- Coder への実装ガイドライン作成

**やらないこと:**
- コードの実装(Coder の仕事)
- 設計判断(Architect の仕事)
- コードレビュー
- コードレビュー(Reviewer の仕事)

## 行動姿勢

- 推測で書かない。名前・値・振る舞いは必ずコードで確認する
- シンプルに分析する。過度に詳細な計画は不要
- 不明点は明確にする。推測で進めない
- 調査してから計画する。既存コードを読まずに計画を立てない
- 推測で書かない。名前・値・振る舞いは必ずコードで確認する。「不明」で止まらない
- シンプルに設計する。過度な抽象化や将来への備えは不要
- 確認が必要な場合は質問を一度にまとめる。追加の確認質問を繰り返さない
- 後方互換コードは計画に含めない。明示的な指示がない限り不要

@@ -33,4 +34,26 @@
| コードの振る舞い | 実際のソースコード |
| 設定値・名前 | 実際の設定ファイル・定義ファイル |
| API・コマンド | 実際の実装コード |
| ドキュメント記述 | 実際のコードベースと突合 |
| データ構造・型 | 型定義ファイル・スキーマ |

### 構造設計

常に最適な構造を選択する。既存コードが悪い構造でも踏襲しない。

**ファイル構成:**
- 1 モジュール 1 責務
- ファイル分割はプログラミング言語のデファクトスタンダードに従う
- 1 ファイル 200-400 行を目安。超える場合は分割を計画に含める
- 既存コードに構造上の問題があれば、タスクスコープ内でリファクタリングを計画に含める

**モジュール設計:**
- 高凝集・低結合
- 依存の方向を守る(上位層 → 下位層)
- 循環依存を作らない
- 責務の分離(読み取りと書き込み、ビジネスロジックと IO)

### 計画の原則

- 後方互換コードは計画に含めない(明示的な指示がない限り不要)
- 使われていないものは削除する計画を立てる
- TODO コメントで済ませる計画は立てない。今やるか、やらないか
@@ -5,9 +5,11 @@ piece_categories:
      - passthrough
      - coding
      - minimal
  🔍 レビュー&修正:
      - compound-eye
  🔍 レビュー:
    pieces:
      - review-fix-minimal
      - review-only
  🎨 フロントエンド: {}
  ⚙️ バックエンド: {}
  🔧 フルスタック:
@@ -32,6 +34,5 @@ piece_categories:
    pieces:
      - research
      - magi
      - review-only
show_others_category: true
others_category_name: その他
@@ -7,7 +7,6 @@ max_iterations: 20
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  architect-planner: ../personas/architect-planner.md
  planner: ../personas/planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md
@@ -25,7 +25,8 @@ initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: architect-planner
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob
@@ -9,7 +9,7 @@ policies:
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  architect-planner: ../personas/architect-planner.md
  planner: ../personas/planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md
@@ -23,7 +23,8 @@ initial_movement: plan
movements:
  - name: plan
    edit: false
    persona: architect-planner
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob
@@ -43,6 +44,7 @@ movements:
      report:
        - name: 00-plan.md
          format: plan

  - name: implement
    edit: true
    persona: coder
@@ -77,6 +79,7 @@ movements:
      report:
        - Scope: 02-coder-scope.md
        - Decisions: 03-coder-decisions.md

  - name: reviewers
    parallel:
      - name: ai_review
@@ -123,6 +126,7 @@ movements:
        next: COMPLETE
      - condition: any("AI特有の問題あり", "needs_fix")
        next: fix

  - name: fix
    edit: true
    persona: coder
builtins/ja/pieces/compound-eye.yaml (new file, 110 lines)
@@ -0,0 +1,110 @@
name: compound-eye
description: 複眼レビュー - 同じ指示を Claude と Codex に同時に投げ、両者の回答を統合する
max_iterations: 10
knowledge:
  architecture: ../knowledge/architecture.md
personas:
  coder: ../personas/coder.md
  supervisor: ../personas/supervisor.md
initial_movement: evaluate

movements:
  - name: evaluate
    parallel:
      - name: claude-eye
        edit: false
        persona: coder
        provider: claude
        knowledge: architecture
        allowed_tools:
          - Read
          - Glob
          - Grep
          - Bash
          - WebSearch
          - WebFetch
        rules:
          - condition: done
          - condition: failed
        output_contracts:
          report:
            - name: 01-claude.md

      - name: codex-eye
        edit: false
        persona: coder
        provider: codex
        knowledge: architecture
        allowed_tools:
          - Read
          - Glob
          - Grep
          - Bash
          - WebSearch
          - WebFetch
        rules:
          - condition: done
          - condition: failed
        output_contracts:
          report:
            - name: 02-codex.md
    rules:
      - condition: any("done")
        next: synthesize

  - name: synthesize
    edit: false
    persona: supervisor
    allowed_tools:
      - Read
      - Glob
      - Grep
    rules:
      - condition: 統合完了
        next: COMPLETE
    instruction_template: |
      2つのモデル(Claude / Codex)が同じ指示に対して独立に回答しました。
      両者の回答を統合してください。

      **やること:**
      1. Report Directory 内のレポートを読む
         - `01-claude.md`(Claude の回答)
         - `02-codex.md`(Codex の回答)
         ※ 片方が存在しない場合(エラーで失敗した場合)、存在するレポートのみで統合する
      2. 両方のレポートがある場合は比較し、以下を明示する
         - 一致している点
         - 相違している点
         - 片方だけが指摘・言及している点
      3. 統合した結論を出す

      **出力フォーマット:**
      ```markdown
      # 複眼レビュー統合

      ## 結論
      {統合した結論}

      ## 回答状況
      | モデル | 状態 |
      |--------|------|
      | Claude | ✅ / ❌ |
      | Codex | ✅ / ❌ |

      ## 一致点
      - {両モデルが同じ見解を示した点}

      ## 相違点
      | 論点 | Claude | Codex |
      |------|--------|-------|
      | {論点} | {Claudeの見解} | {Codexの見解} |

      ## 片方のみの指摘
      - **Claude のみ:** {Claudeだけが言及した点}
      - **Codex のみ:** {Codexだけが言及した点}

      ## 総合評価
      {両者の回答を踏まえた総合的な評価}
      ```
    output_contracts:
      report:
        - Summary: 03-synthesis.md
@@ -9,7 +9,6 @@ knowledge:
  backend: ../knowledge/backend.md
personas:
  planner: ../personas/planner.md
  architect-planner: ../personas/architect-planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md
@@ -17,7 +16,6 @@ personas:
  supervisor: ../personas/supervisor.md
instructions:
  plan: ../instructions/plan.md
  architect: ../instructions/architect.md
  implement: ../instructions/implement.md
  ai-review: ../instructions/ai-review.md
  ai-fix: ../instructions/ai-fix.md
@@ -28,7 +26,6 @@ instructions:
  supervise: ../instructions/supervise.md
report_formats:
  plan: ../output-contracts/plan.md
  architecture-design: ../output-contracts/architecture-design.md
  ai-review: ../output-contracts/ai-review.md
  architecture-review: ../output-contracts/architecture-review.md
  qa-review: ../output-contracts/qa-review.md
@@ -64,6 +61,7 @@ movements:
  - name: plan
    edit: false
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob
@@ -73,7 +71,7 @@ movements:
      - WebFetch
    rules:
      - condition: 要件が明確で実装可能
        next: architect
        next: implement
      - condition: ユーザーが質問をしている(実装タスクではない)
        next: COMPLETE
      - condition: 要件が不明確、情報不足
@@ -87,27 +85,6 @@ movements:
      report:
        - name: 00-plan.md
          format: plan
  - name: architect
    edit: false
    persona: architect-planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    rules:
      - condition: 小規模タスク(設計不要)
        next: implement
      - condition: 設計完了
        next: implement
      - condition: 情報不足、判断できない
        next: ABORT
    instruction: architect
    output_contracts:
      report:
        - name: 01-architecture.md
          format: architecture-design
  - name: implement
    edit: true
    persona: coder
@@ -12,7 +12,6 @@ knowledge:
  backend: ../knowledge/backend.md
personas:
  planner: ../personas/planner.md
  architect-planner: ../personas/architect-planner.md
  coder: ../personas/coder.md
  ai-antipattern-reviewer: ../personas/ai-antipattern-reviewer.md
  architecture-reviewer: ../personas/architecture-reviewer.md
@@ -20,7 +19,6 @@ personas:
  supervisor: ../personas/supervisor.md
instructions:
  plan: ../instructions/plan.md
  architect: ../instructions/architect.md
  implement: ../instructions/implement.md
  ai-review: ../instructions/ai-review.md
  ai-fix: ../instructions/ai-fix.md
@@ -59,6 +57,7 @@ movements:
  - name: plan
    edit: false
    persona: planner
    knowledge: architecture
    allowed_tools:
      - Read
      - Glob
@@ -68,7 +67,7 @@ movements:
      - WebFetch
    rules:
      - condition: 要件が明確で実装可能
        next: architect
        next: implement
      - condition: ユーザーが質問をしている(実装タスクではない)
        next: COMPLETE
      - condition: 要件が不明確、情報不足
@@ -82,27 +81,7 @@ movements:
      report:
        - name: 00-plan.md
          format: plan
  - name: architect
    edit: false
    persona: architect-planner
    allowed_tools:
      - Read
      - Glob
      - Grep
      - WebSearch
      - WebFetch
    rules:
      - condition: 小規模タスク(設計不要)
        next: implement
      - condition: 設計完了
        next: implement
      - condition: 情報不足、判断できない
        next: ABORT
    instruction: architect
    output_contracts:
      report:
        - name: 01-architecture.md
          format: architecture-design

  - name: implement
    edit: true
    persona: coder
@@ -139,6 +118,7 @@ movements:
      report:
        - Scope: 02-coder-scope.md
        - Decisions: 03-coder-decisions.md

  - name: ai_review
    edit: false
    persona: ai-antipattern-reviewer
@@ -161,6 +141,7 @@ movements:
      report:
        - name: 04-ai-review.md
          format: ai-review

  - name: ai_fix
    edit: true
    persona: coder
@@ -189,6 +170,7 @@ movements:
      - condition: 判断できない、情報不足
        next: ai_no_fix
    instruction: ai-fix

  - name: ai_no_fix
    edit: false
    persona: architecture-reviewer
@@ -203,6 +185,7 @@ movements:
      - condition: ai_fixの判断が妥当(修正不要)
        next: reviewers
    instruction: arbitrate

  - name: reviewers
    parallel:
      - name: arch-review
@@ -251,6 +234,7 @@ movements:
        next: supervise
      - condition: any("needs_fix")
        next: fix

  - name: fix
    edit: true
    persona: coder
@@ -276,6 +260,7 @@ movements:
      - condition: 判断できない、情報不足
        next: plan
    instruction: fix

  - name: supervise
    edit: false
    persona: supervisor
@@ -299,7 +284,6 @@ movements:
        - Summary: summary.md
report_formats:
  plan: ../output-contracts/plan.md
  architecture-design: ../output-contracts/architecture-design.md
  ai-review: ../output-contracts/ai-review.md
  architecture-review: ../output-contracts/architecture-review.md
  qa-review: ../output-contracts/qa-review.md
@@ -17,6 +17,7 @@
| 状況 | 判定 | 対応 |
|------|------|------|
| 今回の変更で導入された問題 | ブロッキング | REJECT |
| 今回の変更により未使用になったコード(引数、import、変数、関数) | ブロッキング | REJECT(変更起因の問題) |
| 変更ファイル内の既存問題 | ブロッキング | REJECT(ボーイスカウトルール) |
| 変更モジュール内の構造的問題 | ブロッキング | スコープ内なら REJECT |
| 変更外ファイルの問題 | 非ブロッキング | 記録のみ(参考情報) |
@@ -107,10 +108,18 @@
| 冗長な式(同値の短い書き方がある) | REJECT |
| 不要な分岐・条件(到達しない、または常に同じ結果) | REJECT |
| 数秒〜数分で修正可能な問題 | REJECT(「非ブロッキング」にしない) |
| 変更の結果として未使用になったコード(引数・import等) | REJECT — 変更起因であり「既存問題」ではない |
| 修正にリファクタリングが必要(スコープが大きい) | 記録のみ(技術的負債) |

既存コードの踏襲を理由にした問題の放置は認めない。既存コードが悪い場合、それに合わせるのではなく改善する。

## 判定ルール

- 変更対象ファイル内で検出した問題は、既存コードであっても全てブロッキング(REJECT対象)として扱う
- 「既存問題」「非ブロッキング」に分類してよいのは、変更対象外のファイルの問題のみ
- 「コード自体は以前から存在していた」は非ブロッキングの理由にならない。変更ファイル内にある以上、ボーイスカウトルールが適用される
- 問題が1件でもあればREJECT。「APPROVE + 警告」「APPROVE + 提案」は禁止

## 堂々巡りの検出

同じ種類の指摘が繰り返されている場合、修正指示の繰り返しではなくアプローチ自体を見直す。
@@ -258,6 +258,14 @@ takt clear
# ビルトインピース・エージェントを Claude Code Skill としてデプロイ
takt export-cc

# 利用可能なファセットをレイヤー別に一覧表示
takt catalog
takt catalog personas

# 特定のファセットをカスタマイズ用にコピー
takt eject persona coder
takt eject instruction plan --global

# 各ムーブメント・フェーズの組み立て済みプロンプトをプレビュー
takt prompt [piece]
@@ -428,15 +436,16 @@ TAKTには複数のビルトインピースが同梱されています:

| ピース | 説明 |
|------------|------|
| `default` | フル開発ピース: 計画 → アーキテクチャ設計 → 実装 → AI レビュー → 並列レビュー(アーキテクト+セキュリティ)→ スーパーバイザー承認。各レビュー段階に修正ループあり。 |
| `default` | フル開発ピース: 計画 → 実装 → AI レビュー → 並列レビュー(アーキテクト+QA)→ スーパーバイザー承認。各レビュー段階に修正ループあり。 |
| `minimal` | クイックピース: 計画 → 実装 → レビュー → スーパーバイザー。高速イテレーション向けの最小構成。 |
| `review-fix-minimal` | レビュー重視ピース: レビュー → 修正 → スーパーバイザー。レビューフィードバックに基づく反復改善向け。 |
| `research` | リサーチピース: プランナー → ディガー → スーパーバイザー。質問せずに自律的にリサーチを実行。 |
| `expert` | フルスタック開発ピース: アーキテクチャ、フロントエンド、セキュリティ、QA レビューと修正ループ。 |
| `expert-cqrs` | フルスタック開発ピース(CQRS+ES特化): CQRS+ES、フロントエンド、セキュリティ、QA レビューと修正ループ。 |
| `magi` | エヴァンゲリオンにインスパイアされた審議システム。3つの AI ペルソナ(MELCHIOR、BALTHASAR、CASPER)が分析し投票。 |
| `coding` | 軽量開発ピース: architect-planner → 実装 → 並列レビュー(AI アンチパターン+アーキテクチャ)→ 修正。スーパーバイザーなしの高速フィードバックループ。 |
| `coding` | 軽量開発ピース: planner → 実装 → 並列レビュー(AI アンチパターン+アーキテクチャ)→ 修正。スーパーバイザーなしの高速フィードバックループ。 |
| `passthrough` | 最小構成。タスクをそのまま coder に渡す薄いラッパー。レビューなし。 |
| `compound-eye` | マルチモデルレビュー: Claude と Codex に同じ指示を同時送信し、両方の回答を統合。 |
| `review-only` | 変更を加えない読み取り専用のコードレビューピース。 |

**Hybrid Codex バリアント** (`*-hybrid-codex`): 主要ピースごとに、coder エージェントを Codex で実行しレビュアーは Claude を使うハイブリッド構成が用意されています。対象: default, minimal, expert, expert-cqrs, passthrough, review-fix-minimal, coding。
@@ -462,6 +471,7 @@ TAKTには複数のビルトインピースが同梱されています:
| **research-planner** | リサーチタスクの計画・スコープ定義 |
| **research-digger** | 深掘り調査と情報収集 |
| **research-supervisor** | リサーチ品質の検証と網羅性の評価 |
| **pr-commenter** | レビュー結果を GitHub PR にコメントとして投稿 |

## カスタムペルソナ
@@ -527,6 +537,9 @@ provider: claude # デフォルトプロバイダー: claude または c
model: sonnet                    # デフォルトモデル(オプション)
branch_name_strategy: romaji     # ブランチ名生成: 'romaji'(高速)または 'ai'(低速)
prevent_sleep: false             # macOS の実行中スリープ防止(caffeinate)
notification_sound: true         # 通知音の有効/無効
concurrency: 1                   # takt run の並列タスク数(1-10、デフォルト: 1 = 逐次実行)
interactive_preview_movements: 3 # 対話モードでのムーブメントプレビュー数(0-10、デフォルト: 3)

# API Key 設定(オプション)
# 環境変数 TAKT_ANTHROPIC_API_KEY / TAKT_OPENAI_API_KEY で上書き可能
@@ -742,6 +755,7 @@ rules:
| `permission_mode` | - | パーミッションモード: `readonly`、`edit`、`full`(プロバイダー非依存) |
| `output_contracts` | - | レポートファイルの出力契約定義 |
| `quality_gates` | - | ムーブメント完了要件のAIディレクティブ |
| `mcp_servers` | - | MCP(Model Context Protocol)サーバー設定(stdio/SSE/HTTP) |
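`mcp_servers` はムーブメント単位で MCP サーバーを定義するキーです(stdio / SSE / HTTP トランスポート対応)。以下はあくまで仮のスケッチであり、`type`・`command`・`args`・`url` といったフィールド名やサーバー名はこの差分からは確認できない想定値です:

```yaml
movements:
  - name: implement
    persona: coder
    # 仮の MCP サーバー定義(フィールド名は例示であり、
    # TAKT の実際のスキーマとは異なる可能性があります)
    mcp_servers:
      filesystem:
        type: stdio    # 子プロセスとして起動する stdio トランスポート
        command: npx
        args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
      internal-docs:
        type: http     # リモートサーバーへの HTTP トランスポート
        url: https://example.com/mcp
```

実際のフィールド名と指定方法は、TAKT のピース設定リファレンスを参照してください。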

## API使用例
@@ -66,7 +66,7 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    expect(result.stdout).toContain('Available builtin pieces');
  });

  it('should eject piece to project .takt/ by default', () => {
  it('should eject piece YAML only to project .takt/ by default', () => {
    const result = runTakt({
      args: ['eject', 'default'],
      cwd: repo.path,
@@ -79,14 +79,12 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    const piecePath = join(repo.path, '.takt', 'pieces', 'default.yaml');
    expect(existsSync(piecePath)).toBe(true);

    // Personas should be in project .takt/personas/
    // Personas should NOT be copied (resolved via layer system)
    const personasDir = join(repo.path, '.takt', 'personas');
    expect(existsSync(personasDir)).toBe(true);
    expect(existsSync(join(personasDir, 'coder.md'))).toBe(true);
    expect(existsSync(join(personasDir, 'planner.md'))).toBe(true);
    expect(existsSync(personasDir)).toBe(false);
  });

  it('should preserve relative persona paths in ejected piece (no rewriting)', () => {
  it('should preserve content of builtin piece YAML as-is', () => {
    runTakt({
      args: ['eject', 'default'],
      cwd: repo.path,
@@ -96,13 +94,13 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    const piecePath = join(repo.path, '.takt', 'pieces', 'default.yaml');
    const content = readFileSync(piecePath, 'utf-8');

    // Relative paths should be preserved as ../personas/
    expect(content).toContain('../personas/');
    // Content should be an exact copy of builtin — paths preserved as-is
    expect(content).toContain('name: default');
    // Should NOT contain rewritten absolute paths
    expect(content).not.toContain('~/.takt/personas/');
  });

  it('should eject piece to global ~/.takt/ with --global flag', () => {
  it('should eject piece YAML only to global ~/.takt/ with --global flag', () => {
    const result = runTakt({
      args: ['eject', 'default', '--global'],
      cwd: repo.path,
@@ -115,10 +113,9 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    const piecePath = join(isolatedEnv.taktDir, 'pieces', 'default.yaml');
    expect(existsSync(piecePath)).toBe(true);

    // Personas should be in global personas dir
    // Personas should NOT be copied (resolved via layer system)
    const personasDir = join(isolatedEnv.taktDir, 'personas');
    expect(existsSync(personasDir)).toBe(true);
    expect(existsSync(join(personasDir, 'coder.md'))).toBe(true);
    expect(existsSync(personasDir)).toBe(false);

    // Should NOT be in project dir
    const projectPiecePath = join(repo.path, '.takt', 'pieces', 'default.yaml');
@@ -155,7 +152,7 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    expect(result.stdout).toContain('not found');
  });

  it('should correctly eject personas for pieces with unique personas', () => {
  it('should eject piece YAML only for pieces with unique personas', () => {
    const result = runTakt({
      args: ['eject', 'magi'],
      cwd: repo.path,
@@ -164,14 +161,80 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {

    expect(result.exitCode).toBe(0);

    // MAGI piece should have its personas ejected
    // Piece YAML should be copied
    const piecePath = join(repo.path, '.takt', 'pieces', 'magi.yaml');
    expect(existsSync(piecePath)).toBe(true);

    // Personas should NOT be copied (resolved via layer system)
    const personasDir = join(repo.path, '.takt', 'personas');
    expect(existsSync(join(personasDir, 'melchior.md'))).toBe(true);
    expect(existsSync(join(personasDir, 'balthasar.md'))).toBe(true);
    expect(existsSync(join(personasDir, 'casper.md'))).toBe(true);
    expect(existsSync(personasDir)).toBe(false);
  });

  it('should preserve relative paths for global eject too', () => {
  it('should eject individual facet to project .takt/', () => {
    const result = runTakt({
      args: ['eject', 'persona', 'coder'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    expect(result.exitCode).toBe(0);

    // Persona should be copied to project .takt/personas/
    const personaPath = join(repo.path, '.takt', 'personas', 'coder.md');
    expect(existsSync(personaPath)).toBe(true);
    const content = readFileSync(personaPath, 'utf-8');
    expect(content.length).toBeGreaterThan(0);
  });

  it('should eject individual facet to global ~/.takt/ with --global', () => {
    const result = runTakt({
      args: ['eject', 'persona', 'coder', '--global'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    expect(result.exitCode).toBe(0);

    // Persona should be copied to global dir
    const personaPath = join(isolatedEnv.taktDir, 'personas', 'coder.md');
    expect(existsSync(personaPath)).toBe(true);

    // Should NOT be in project dir
    const projectPersonaPath = join(repo.path, '.takt', 'personas', 'coder.md');
    expect(existsSync(projectPersonaPath)).toBe(false);
  });

  it('should skip eject facet when already exists', () => {
    // First eject
    runTakt({
      args: ['eject', 'persona', 'coder'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    // Second eject — should skip
    const result = runTakt({
      args: ['eject', 'persona', 'coder'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    expect(result.exitCode).toBe(0);
    expect(result.stdout).toContain('Already exists');
  });

  it('should report error for non-existent facet', () => {
    const result = runTakt({
      args: ['eject', 'persona', 'nonexistent-xyz'],
      cwd: repo.path,
      env: isolatedEnv.env,
    });

    expect(result.exitCode).toBe(0);
    expect(result.stdout).toContain('not found');
  });

  it('should preserve content of builtin piece YAML for global eject', () => {
    runTakt({
      args: ['eject', 'magi', '--global'],
      cwd: repo.path,
@@ -181,7 +244,7 @@ describe('E2E: Eject builtin pieces (takt eject)', () => {
    const piecePath = join(isolatedEnv.taktDir, 'pieces', 'magi.yaml');
    const content = readFileSync(piecePath, 'utf-8');

    expect(content).toContain('../personas/');
    expect(content).toContain('name: magi');
    expect(content).not.toContain('~/.takt/personas/');
  });
});
package-lock.json (generated, 12 lines changed)
@@ -1,15 +1,15 @@
{
  "name": "takt",
  "version": "0.8.0",
  "version": "0.9.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "takt",
      "version": "0.8.0",
      "version": "0.9.0",
      "license": "MIT",
      "dependencies": {
        "@anthropic-ai/claude-agent-sdk": "^0.2.34",
        "@anthropic-ai/claude-agent-sdk": "^0.2.37",
        "@openai/codex-sdk": "^0.98.0",
        "chalk": "^5.3.0",
        "commander": "^12.1.0",
@@ -39,9 +39,9 @@
      }
    },
    "node_modules/@anthropic-ai/claude-agent-sdk": {
      "version": "0.2.34",
      "resolved": "https://registry.npmjs.org/@anthropic-ai/claude-agent-sdk/-/claude-agent-sdk-0.2.34.tgz",
      "integrity": "sha512-QLHd3Nt7bGU7/YH71fXFaztM9fNxGGruzTMrTYJkbm5gYJl5ZyU2zGyoE5VpWC0e1QU0yYdNdBVgqSYDcJGufg==",
      "version": "0.2.37",
      "resolved": "https://registry.npmjs.org/@anthropic-ai/claude-agent-sdk/-/claude-agent-sdk-0.2.37.tgz",
      "integrity": "sha512-0TCAUuGXiWYV2JK+j2SiakGzPA7aoR5DNRxZ0EA571loGIqN3FRfiO1kipeBpEc+cRQ03a/4Kt5YAjMx0KBW+A==",
      "license": "SEE LICENSE IN README.md",
      "engines": {
        "node": ">=18.0.0"
@@ -1,6 +1,6 @@
{
  "name": "takt",
  "version": "0.8.0",
  "version": "0.9.0",
  "description": "TAKT: Task Agent Koordination Tool - AI Agent Piece Orchestration",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@@ -57,7 +57,7 @@
    "builtins/"
  ],
  "dependencies": {
    "@anthropic-ai/claude-agent-sdk": "^0.2.34",
    "@anthropic-ai/claude-agent-sdk": "^0.2.37",
    "@openai/codex-sdk": "^0.98.0",
    "chalk": "^5.3.0",
    "commander": "^12.1.0",
@@ -53,7 +53,8 @@ vi.mock('../infra/config/loaders/pieceResolver.js', () => ({
  getPieceDescription: vi.fn(() => ({
    name: 'default',
    description: '',
    pieceStructure: '1. implement\n2. review'
    pieceStructure: '1. implement\n2. review',
    movementPreviews: [],
  })),
}));
src/__tests__/catalog.test.ts (new file, 373 lines)
@@ -0,0 +1,373 @@
/**
 * Tests for facet catalog scanning and display.
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import {
  extractDescription,
  parseFacetType,
  scanFacets,
  displayFacets,
  showCatalog,
  type FacetEntry,
} from '../features/catalog/catalogFacets.js';

// Mock external dependencies to isolate unit tests
vi.mock('../infra/config/global/globalConfig.js', () => ({
  getLanguage: () => 'en',
  getBuiltinPiecesEnabled: () => true,
}));

const mockLogError = vi.fn();
const mockInfo = vi.fn();
vi.mock('../shared/ui/index.js', () => ({
  error: (...args: unknown[]) => mockLogError(...args),
  info: (...args: unknown[]) => mockInfo(...args),
  section: (title: string) => console.log(title),
}));

let mockBuiltinDir: string;
vi.mock('../infra/resources/index.js', () => ({
  getLanguageResourcesDir: () => mockBuiltinDir,
}));

let mockGlobalDir: string;
vi.mock('../infra/config/paths.js', () => ({
  getGlobalConfigDir: () => mockGlobalDir,
  getProjectConfigDir: (cwd: string) => join(cwd, '.takt'),
}));

describe('parseFacetType', () => {
  it('should return FacetType for valid inputs', () => {
    expect(parseFacetType('personas')).toBe('personas');
    expect(parseFacetType('policies')).toBe('policies');
    expect(parseFacetType('knowledge')).toBe('knowledge');
    expect(parseFacetType('instructions')).toBe('instructions');
    expect(parseFacetType('output-contracts')).toBe('output-contracts');
  });

  it('should return null for invalid inputs', () => {
    expect(parseFacetType('unknown')).toBeNull();
    expect(parseFacetType('persona')).toBeNull();
    expect(parseFacetType('')).toBeNull();
  });
});

describe('extractDescription', () => {
  let tempDir: string;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-catalog-test-'));
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });
||||
it('should extract first heading from markdown file', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, '# My Persona\n\nSome content here.');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('My Persona');
|
||||
});
|
||||
|
||||
it('should return first non-empty line when no heading exists', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, 'No heading in this file\nJust plain text.');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('No heading in this file');
|
||||
});
|
||||
|
||||
it('should return empty string when file is empty', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, '');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('');
|
||||
});
|
||||
|
||||
it('should skip blank lines and return first non-empty line', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, '\n\n \nActual content here\nMore text.');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('Actual content here');
|
||||
});
|
||||
|
||||
it('should extract from first heading, ignoring later headings', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, 'Preamble\n# First Heading\n# Second Heading');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('First Heading');
|
||||
});
|
||||
|
||||
it('should trim whitespace from heading text', () => {
|
||||
const filePath = join(tempDir, 'test.md');
|
||||
writeFileSync(filePath, '# Spaced Heading \n');
|
||||
|
||||
expect(extractDescription(filePath)).toBe('Spaced Heading');
|
||||
});
|
||||
});
|
||||
|
||||
describe('scanFacets', () => {
|
||||
let tempDir: string;
|
||||
let builtinDir: string;
|
||||
let globalDir: string;
|
||||
let projectDir: string;
|
||||
|
||||
beforeEach(() => {
|
||||
tempDir = mkdtempSync(join(tmpdir(), 'takt-catalog-test-'));
|
||||
builtinDir = join(tempDir, 'builtin-lang');
|
||||
globalDir = join(tempDir, 'global');
|
||||
projectDir = join(tempDir, 'project');
|
||||
|
||||
mockBuiltinDir = builtinDir;
|
||||
mockGlobalDir = globalDir;
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
rmSync(tempDir, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
it('should collect facets from all three layers', () => {
|
||||
// Given: facets in builtin, user, and project layers
|
||||
const builtinPersonas = join(builtinDir, 'personas');
|
||||
const globalPersonas = join(globalDir, 'personas');
|
||||
const projectPersonas = join(projectDir, '.takt', 'personas');
|
||||
mkdirSync(builtinPersonas, { recursive: true });
|
||||
mkdirSync(globalPersonas, { recursive: true });
|
||||
mkdirSync(projectPersonas, { recursive: true });
|
||||
|
||||
writeFileSync(join(builtinPersonas, 'coder.md'), '# Coder Agent');
|
||||
writeFileSync(join(globalPersonas, 'my-reviewer.md'), '# My Reviewer');
|
||||
writeFileSync(join(projectPersonas, 'project-coder.md'), '# Project Coder');
|
||||
|
||||
// When: scanning personas
|
||||
const entries = scanFacets('personas', projectDir);
|
||||
|
||||
// Then: all three entries are collected
|
||||
expect(entries).toHaveLength(3);
|
||||
|
||||
const coder = entries.find((e) => e.name === 'coder');
|
||||
expect(coder).toBeDefined();
|
||||
expect(coder!.source).toBe('builtin');
|
||||
expect(coder!.description).toBe('Coder Agent');
|
||||
|
||||
const myReviewer = entries.find((e) => e.name === 'my-reviewer');
|
||||
expect(myReviewer).toBeDefined();
|
||||
expect(myReviewer!.source).toBe('user');
|
||||
|
||||
const projectCoder = entries.find((e) => e.name === 'project-coder');
|
||||
expect(projectCoder).toBeDefined();
|
||||
expect(projectCoder!.source).toBe('project');
|
||||
});
|
||||
|
||||
it('should detect override when higher layer has same name', () => {
|
||||
// Given: same facet name in builtin and user layers
|
||||
const builtinPersonas = join(builtinDir, 'personas');
|
||||
const globalPersonas = join(globalDir, 'personas');
|
||||
mkdirSync(builtinPersonas, { recursive: true });
|
||||
mkdirSync(globalPersonas, { recursive: true });
|
||||
|
||||
writeFileSync(join(builtinPersonas, 'coder.md'), '# Builtin Coder');
|
||||
writeFileSync(join(globalPersonas, 'coder.md'), '# Custom Coder');
|
||||
|
||||
// When: scanning personas
|
||||
const entries = scanFacets('personas', tempDir);
|
||||
|
||||
// Then: builtin entry is marked as overridden by user
|
||||
const builtinCoder = entries.find((e) => e.name === 'coder' && e.source === 'builtin');
|
||||
expect(builtinCoder).toBeDefined();
|
||||
expect(builtinCoder!.overriddenBy).toBe('user');
|
||||
|
||||
const userCoder = entries.find((e) => e.name === 'coder' && e.source === 'user');
|
||||
expect(userCoder).toBeDefined();
|
||||
expect(userCoder!.overriddenBy).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should detect override through project layer', () => {
|
||||
// Given: same facet name in builtin and project layers
|
||||
const builtinPolicies = join(builtinDir, 'policies');
|
||||
const projectPolicies = join(projectDir, '.takt', 'policies');
|
||||
mkdirSync(builtinPolicies, { recursive: true });
|
||||
mkdirSync(projectPolicies, { recursive: true });
|
||||
|
||||
writeFileSync(join(builtinPolicies, 'coding.md'), '# Builtin Coding');
|
||||
writeFileSync(join(projectPolicies, 'coding.md'), '# Project Coding');
|
||||
|
||||
// When: scanning policies
|
||||
const entries = scanFacets('policies', projectDir);
|
||||
|
||||
// Then: builtin entry is marked as overridden by project
|
||||
const builtinCoding = entries.find((e) => e.name === 'coding' && e.source === 'builtin');
|
||||
expect(builtinCoding).toBeDefined();
|
||||
expect(builtinCoding!.overriddenBy).toBe('project');
|
||||
});
|
||||
|
||||
it('should handle non-existent directories gracefully', () => {
|
||||
// Given: no directories exist
|
||||
// When: scanning a facet type
|
||||
const entries = scanFacets('knowledge', projectDir);
|
||||
|
||||
// Then: returns empty array
|
||||
expect(entries).toEqual([]);
|
||||
});
|
||||
|
||||
it('should only include .md files', () => {
|
||||
// Given: directory with mixed file types
|
||||
const builtinKnowledge = join(builtinDir, 'knowledge');
|
||||
mkdirSync(builtinKnowledge, { recursive: true });
|
||||
|
||||
writeFileSync(join(builtinKnowledge, 'valid.md'), '# Valid');
|
||||
writeFileSync(join(builtinKnowledge, 'ignored.txt'), 'Not a markdown');
|
||||
writeFileSync(join(builtinKnowledge, 'also-ignored.yaml'), 'name: yaml');
|
||||
|
||||
// When: scanning knowledge
|
||||
const entries = scanFacets('knowledge', tempDir);
|
||||
|
||||
// Then: only .md file is included
|
||||
expect(entries).toHaveLength(1);
|
||||
expect(entries[0]!.name).toBe('valid');
|
||||
});
|
||||
|
||||
it('should work with all facet types', () => {
|
||||
// Given: one facet in each type directory
|
||||
const types = ['personas', 'policies', 'knowledge', 'instructions', 'output-contracts'] as const;
|
||||
for (const type of types) {
|
||||
const dir = join(builtinDir, type);
|
||||
mkdirSync(dir, { recursive: true });
|
||||
writeFileSync(join(dir, 'test.md'), `# Test ${type}`);
|
||||
}
|
||||
|
||||
// When/Then: each type is scannable
|
||||
for (const type of types) {
|
||||
const entries = scanFacets(type, tempDir);
|
||||
expect(entries).toHaveLength(1);
|
||||
expect(entries[0]!.name).toBe('test');
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe('displayFacets', () => {
|
||||
let consoleSpy: ReturnType<typeof vi.spyOn>;
|
||||
|
||||
beforeEach(() => {
|
||||
consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
consoleSpy.mockRestore();
|
||||
});
|
||||
|
||||
it('should display entries with name, description, and source', () => {
|
||||
// Given: a list of facet entries
|
||||
const entries: FacetEntry[] = [
|
||||
{ name: 'coder', description: 'Coder Agent', source: 'builtin' },
|
||||
{ name: 'my-reviewer', description: 'My Reviewer', source: 'user' },
|
||||
];
|
||||
|
||||
// When: displaying facets
|
||||
displayFacets('personas', entries);
|
||||
|
||||
// Then: output contains facet names
|
||||
const output = consoleSpy.mock.calls.map((c) => c[0]).join('\n');
|
||||
expect(output).toContain('coder');
|
||||
expect(output).toContain('my-reviewer');
|
||||
expect(output).toContain('Personas');
|
||||
});
|
||||
|
||||
it('should display (none) when entries are empty', () => {
|
||||
// Given: empty entries
|
||||
const entries: FacetEntry[] = [];
|
||||
|
||||
// When: displaying facets
|
||||
displayFacets('policies', entries);
|
||||
|
||||
// Then: output shows (none)
|
||||
const output = consoleSpy.mock.calls.map((c) => c[0]).join('\n');
|
||||
expect(output).toContain('(none)');
|
||||
});
|
||||
|
||||
it('should display override information', () => {
|
||||
// Given: an overridden entry
|
||||
const entries: FacetEntry[] = [
|
||||
{ name: 'coder', description: 'Builtin Coder', source: 'builtin', overriddenBy: 'user' },
|
||||
];
|
||||
|
||||
// When: displaying facets
|
||||
displayFacets('personas', entries);
|
||||
|
||||
// Then: output contains override info
|
||||
const output = consoleSpy.mock.calls.map((c) => c[0]).join('\n');
|
||||
expect(output).toContain('overridden by user');
|
||||
});
|
||||
});
|
||||
|
||||
describe('showCatalog', () => {
|
||||
let tempDir: string;
|
||||
let builtinDir: string;
|
||||
let globalDir: string;
|
||||
let consoleSpy: ReturnType<typeof vi.spyOn>;
|
||||
|
||||
beforeEach(() => {
|
||||
tempDir = mkdtempSync(join(tmpdir(), 'takt-catalog-test-'));
|
||||
builtinDir = join(tempDir, 'builtin-lang');
|
||||
globalDir = join(tempDir, 'global');
|
||||
|
||||
mockBuiltinDir = builtinDir;
|
||||
mockGlobalDir = globalDir;
|
||||
mockLogError.mockClear();
|
||||
mockInfo.mockClear();
|
||||
consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
consoleSpy.mockRestore();
|
||||
rmSync(tempDir, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
it('should display only the specified facet type when valid type is given', () => {
|
||||
// Given: personas facet exists
|
||||
const builtinPersonas = join(builtinDir, 'personas');
|
||||
mkdirSync(builtinPersonas, { recursive: true });
|
||||
writeFileSync(join(builtinPersonas, 'coder.md'), '# Coder Agent');
|
||||
|
||||
// When: showing catalog for personas only
|
||||
showCatalog(tempDir, 'personas');
|
||||
|
||||
// Then: output contains the facet name and no error
|
||||
const output = consoleSpy.mock.calls.map((c) => c[0]).join('\n');
|
||||
expect(output).toContain('coder');
|
||||
expect(mockLogError).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should show error when invalid facet type is given', () => {
|
||||
// When: showing catalog for an invalid type
|
||||
showCatalog(tempDir, 'invalid-type');
|
||||
|
||||
// Then: error is logged with the invalid type name
|
||||
expect(mockLogError).toHaveBeenCalledWith(
|
||||
expect.stringContaining('invalid-type'),
|
||||
);
|
||||
// Then: available types are shown via info
|
||||
expect(mockInfo).toHaveBeenCalledWith(
|
||||
expect.stringContaining('personas'),
|
||||
);
|
||||
});
|
||||
|
||||
it('should display all five facet types when no type is specified', () => {
|
||||
// Given: no facets exist (empty directories)
|
||||
|
||||
// When: showing catalog without specifying a type
|
||||
showCatalog(tempDir);
|
||||
|
||||
// Then: all 5 facet type headings are displayed
|
||||
const output = consoleSpy.mock.calls.map((c) => c[0]).join('\n');
|
||||
expect(output).toContain('Personas');
|
||||
expect(output).toContain('Policies');
|
||||
expect(output).toContain('Knowledge');
|
||||
expect(output).toContain('Instructions');
|
||||
expect(output).toContain('Output-contracts');
|
||||
});
|
||||
});
|
||||
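The tests above pin down how 3-layer facet resolution marks shadowed entries: a same-named facet in a higher layer (project over user over builtin) leaves the lower entry tagged with `overriddenBy`. A minimal sketch of that marking logic, written as a pure function for illustration (the real `scanFacets` reads directories; this function and its shape are assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of override marking. Layer precedence (low to high):
// builtin -> user -> project. A facet in a higher layer shadows a
// same-named facet in any lower layer.
type Source = 'builtin' | 'user' | 'project';

interface FacetEntry {
  name: string;
  description: string;
  source: Source;
  overriddenBy?: Source;
}

const PRECEDENCE: Source[] = ['builtin', 'user', 'project'];

function markOverrides(entries: FacetEntry[]): FacetEntry[] {
  return entries.map((entry) => {
    // Find the highest-precedence layer that also defines this name.
    const winner = entries
      .filter((e) => e.name === entry.name)
      .sort((a, b) => PRECEDENCE.indexOf(b.source) - PRECEDENCE.indexOf(a.source))[0]!;
    // The winning layer keeps its entry unmarked; everyone else is shadowed.
    return winner.source === entry.source
      ? entry
      : { ...entry, overriddenBy: winner.source };
  });
}

const marked = markOverrides([
  { name: 'coder', description: 'Builtin Coder', source: 'builtin' },
  { name: 'coder', description: 'Custom Coder', source: 'user' },
]);
console.log(marked.map((e) => `${e.source}: ${e.overriddenBy ?? 'active'}`));
```

Keeping both the shadowed and shadowing entries (rather than dropping the loser) is what lets `takt catalog` show "overridden by user" next to the builtin row.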
259  src/__tests__/cli-routing-issue-resolve.test.ts  Normal file
@@ -0,0 +1,259 @@
/**
 * Tests for issue resolution in routing module.
 *
 * Verifies that issue references (--issue N or #N positional arg)
 * are resolved before interactive mode and passed to selectAndExecuteTask
 * via selectOptions.issues.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';

vi.mock('../shared/ui/index.js', () => ({
  info: vi.fn(),
  error: vi.fn(),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  createLogger: () => ({
    info: vi.fn(),
    debug: vi.fn(),
    error: vi.fn(),
  }),
}));

vi.mock('../infra/github/issue.js', () => ({
  parseIssueNumbers: vi.fn(() => []),
  checkGhCli: vi.fn(),
  fetchIssue: vi.fn(),
  formatIssueAsTask: vi.fn(),
  isIssueReference: vi.fn(),
  resolveIssueTask: vi.fn(),
  createIssue: vi.fn(),
}));

vi.mock('../features/tasks/index.js', () => ({
  selectAndExecuteTask: vi.fn(),
  determinePiece: vi.fn(),
  saveTaskFromInteractive: vi.fn(),
  createIssueFromTask: vi.fn(),
}));

vi.mock('../features/pipeline/index.js', () => ({
  executePipeline: vi.fn(),
}));

vi.mock('../features/interactive/index.js', () => ({
  interactiveMode: vi.fn(),
}));

vi.mock('../infra/config/index.js', () => ({
  getPieceDescription: vi.fn(() => ({ name: 'default', description: 'test piece', pieceStructure: '', movementPreviews: [] })),
  loadGlobalConfig: vi.fn(() => ({ interactivePreviewMovements: 3 })),
}));

vi.mock('../shared/constants.js', () => ({
  DEFAULT_PIECE_NAME: 'default',
}));

const mockOpts: Record<string, unknown> = {};

vi.mock('../app/cli/program.js', () => {
  const chainable = {
    opts: vi.fn(() => mockOpts),
    argument: vi.fn().mockReturnThis(),
    action: vi.fn().mockReturnThis(),
  };
  return {
    program: chainable,
    resolvedCwd: '/test/cwd',
    pipelineMode: false,
  };
});

vi.mock('../app/cli/helpers.js', () => ({
  resolveAgentOverrides: vi.fn(),
  parseCreateWorktreeOption: vi.fn(),
  isDirectTask: vi.fn(() => false),
}));

import { checkGhCli, fetchIssue, formatIssueAsTask, parseIssueNumbers } from '../infra/github/issue.js';
import { selectAndExecuteTask, determinePiece } from '../features/tasks/index.js';
import { interactiveMode } from '../features/interactive/index.js';
import { isDirectTask } from '../app/cli/helpers.js';
import { executeDefaultAction } from '../app/cli/routing.js';
import type { GitHubIssue } from '../infra/github/types.js';

const mockCheckGhCli = vi.mocked(checkGhCli);
const mockFetchIssue = vi.mocked(fetchIssue);
const mockFormatIssueAsTask = vi.mocked(formatIssueAsTask);
const mockParseIssueNumbers = vi.mocked(parseIssueNumbers);
const mockSelectAndExecuteTask = vi.mocked(selectAndExecuteTask);
const mockDeterminePiece = vi.mocked(determinePiece);
const mockInteractiveMode = vi.mocked(interactiveMode);
const mockIsDirectTask = vi.mocked(isDirectTask);

function createMockIssue(number: number): GitHubIssue {
  return {
    number,
    title: `Issue #${number}`,
    body: `Body of issue #${number}`,
    labels: [],
    comments: [],
  };
}

beforeEach(() => {
  vi.clearAllMocks();
  // Reset opts
  for (const key of Object.keys(mockOpts)) {
    delete mockOpts[key];
  }
  // Default setup
  mockDeterminePiece.mockResolvedValue('default');
  mockInteractiveMode.mockResolvedValue({ action: 'execute', task: 'summarized task' });
  mockIsDirectTask.mockReturnValue(false);
  mockParseIssueNumbers.mockReturnValue([]);
});

describe('Issue resolution in routing', () => {
  describe('--issue option', () => {
    it('should resolve issue and pass to interactive mode when --issue is specified', async () => {
      // Given
      mockOpts.issue = 131;
      const issue131 = createMockIssue(131);
      mockCheckGhCli.mockReturnValue({ available: true });
      mockFetchIssue.mockReturnValue(issue131);
      mockFormatIssueAsTask.mockReturnValue('## GitHub Issue #131: Issue #131');

      // When
      await executeDefaultAction();

      // Then: issue should be fetched
      expect(mockFetchIssue).toHaveBeenCalledWith(131);

      // Then: interactive mode should receive the formatted issue as initial input
      expect(mockInteractiveMode).toHaveBeenCalledWith(
        '/test/cwd',
        '## GitHub Issue #131: Issue #131',
        expect.anything(),
      );

      // Then: selectAndExecuteTask should receive issues in options
      expect(mockSelectAndExecuteTask).toHaveBeenCalledWith(
        '/test/cwd',
        'summarized task',
        expect.objectContaining({
          issues: [issue131],
        }),
        undefined,
      );
    });

    it('should exit with error when gh CLI is unavailable for --issue', async () => {
      // Given
      mockOpts.issue = 131;
      mockCheckGhCli.mockReturnValue({
        available: false,
        error: 'gh CLI is not installed',
      });

      const mockExit = vi.spyOn(process, 'exit').mockImplementation(() => {
        throw new Error('process.exit called');
      });

      // When / Then
      await expect(executeDefaultAction()).rejects.toThrow('process.exit called');
      expect(mockExit).toHaveBeenCalledWith(1);
      expect(mockInteractiveMode).not.toHaveBeenCalled();

      mockExit.mockRestore();
    });
  });

  describe('#N positional argument', () => {
    it('should resolve issue reference and pass to interactive mode', async () => {
      // Given
      const issue131 = createMockIssue(131);
      mockIsDirectTask.mockReturnValue(true);
      mockCheckGhCli.mockReturnValue({ available: true });
      mockFetchIssue.mockReturnValue(issue131);
      mockFormatIssueAsTask.mockReturnValue('## GitHub Issue #131: Issue #131');
      mockParseIssueNumbers.mockReturnValue([131]);

      // When
      await executeDefaultAction('#131');

      // Then: interactive mode should be entered with formatted issue
      expect(mockInteractiveMode).toHaveBeenCalledWith(
        '/test/cwd',
        '## GitHub Issue #131: Issue #131',
        expect.anything(),
      );

      // Then: selectAndExecuteTask should receive issues
      expect(mockSelectAndExecuteTask).toHaveBeenCalledWith(
        '/test/cwd',
        'summarized task',
        expect.objectContaining({
          issues: [issue131],
        }),
        undefined,
      );
    });
  });

  describe('non-issue input', () => {
    it('should pass regular text input to interactive mode without issues', async () => {
      // When
      await executeDefaultAction('refactor the code');

      // Then: interactive mode should receive the original text
      expect(mockInteractiveMode).toHaveBeenCalledWith(
        '/test/cwd',
        'refactor the code',
        expect.anything(),
      );

      // Then: no issue fetching should occur
      expect(mockFetchIssue).not.toHaveBeenCalled();

      // Then: selectAndExecuteTask should be called without issues
      const callArgs = mockSelectAndExecuteTask.mock.calls[0];
      expect(callArgs?.[2]?.issues).toBeUndefined();
    });

    it('should enter interactive mode with no input when no args provided', async () => {
      // When
      await executeDefaultAction();

      // Then: interactive mode should be entered with undefined input
      expect(mockInteractiveMode).toHaveBeenCalledWith(
        '/test/cwd',
        undefined,
        expect.anything(),
      );

      // Then: no issue fetching should occur
      expect(mockFetchIssue).not.toHaveBeenCalled();
    });
  });

  describe('interactive mode cancel', () => {
    it('should not call selectAndExecuteTask when interactive mode is cancelled', async () => {
      // Given
      mockOpts.issue = 131;
      const issue131 = createMockIssue(131);
      mockCheckGhCli.mockReturnValue({ available: true });
      mockFetchIssue.mockReturnValue(issue131);
      mockFormatIssueAsTask.mockReturnValue('## GitHub Issue #131');
      mockInteractiveMode.mockResolvedValue({ action: 'cancel', task: '' });

      // When
      await executeDefaultAction();

      // Then
      expect(mockSelectAndExecuteTask).not.toHaveBeenCalled();
    });
  });
});
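The routing tests above mock `parseIssueNumbers` to return `[131]` for the positional argument `#131` and `[]` for plain task text. A minimal sketch of what such a parser might look like (this regex-based implementation is an assumption for illustration, not the actual `infra/github/issue.js` code):

```typescript
// Hypothetical sketch: extract GitHub issue numbers from a CLI argument.
// "#131" (or "#131 #132") yields issue numbers; plain task text yields none.
function parseIssueNumbers(input: string): number[] {
  const matches = input.match(/#(\d+)/g) ?? [];
  return matches.map((m) => Number(m.slice(1)));
}

const fromRef = parseIssueNumbers('#131');            // yields [131]
const fromText = parseIssueNumbers('refactor the code'); // yields []
console.log(fromRef, fromText);
```

This split is exactly what the "non-issue input" tests rely on: an empty result means no `gh` lookup happens and the raw text goes straight to interactive mode.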
@@ -10,6 +10,11 @@ vi.mock('../shared/prompt/index.js', () => ({
   selectOptionWithDefault: vi.fn(),
 }));

+vi.mock('../infra/task/git.js', () => ({
+  stageAndCommit: vi.fn(),
+  getCurrentBranch: vi.fn(() => 'main'),
+}));
+
 vi.mock('../infra/task/clone.js', () => ({
   createSharedClone: vi.fn(),
   removeClone: vi.fn(),
128  src/__tests__/eject-facet.test.ts  Normal file
@@ -0,0 +1,128 @@
/**
 * Tests for ejectFacet function.
 *
 * Covers:
 * - Normal copy from builtin to project layer
 * - Normal copy from builtin to global layer (--global)
 * - Skip when facet already exists at destination
 * - Error and listing when facet not found in builtins
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { existsSync, readFileSync, mkdtempSync, mkdirSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// vi.hoisted runs before vi.mock hoisting — safe for shared state
const mocks = vi.hoisted(() => {
  let builtinDir = '';
  let projectFacetDir = '';
  let globalFacetDir = '';

  return {
    get builtinDir() { return builtinDir; },
    set builtinDir(v: string) { builtinDir = v; },
    get projectFacetDir() { return projectFacetDir; },
    set projectFacetDir(v: string) { projectFacetDir = v; },
    get globalFacetDir() { return globalFacetDir; },
    set globalFacetDir(v: string) { globalFacetDir = v; },
    ui: {
      header: vi.fn(),
      success: vi.fn(),
      info: vi.fn(),
      warn: vi.fn(),
      error: vi.fn(),
      blankLine: vi.fn(),
    },
  };
});

vi.mock('../infra/config/index.js', () => ({
  getLanguage: () => 'en' as const,
  getBuiltinFacetDir: () => mocks.builtinDir,
  getProjectFacetDir: () => mocks.projectFacetDir,
  getGlobalFacetDir: () => mocks.globalFacetDir,
  getGlobalPiecesDir: vi.fn(),
  getProjectPiecesDir: vi.fn(),
  getBuiltinPiecesDir: vi.fn(),
}));

vi.mock('../shared/ui/index.js', () => mocks.ui);

import { ejectFacet } from '../features/config/ejectBuiltin.js';

function createTestDirs() {
  const baseDir = mkdtempSync(join(tmpdir(), 'takt-eject-facet-test-'));
  const builtinDir = join(baseDir, 'builtins', 'personas');
  const projectDir = join(baseDir, 'project');
  const globalDir = join(baseDir, 'global');

  mkdirSync(builtinDir, { recursive: true });
  mkdirSync(projectDir, { recursive: true });
  mkdirSync(globalDir, { recursive: true });

  writeFileSync(join(builtinDir, 'coder.md'), '# Coder Persona\nYou are a coder.');
  writeFileSync(join(builtinDir, 'planner.md'), '# Planner Persona\nYou are a planner.');

  return {
    baseDir,
    builtinDir,
    projectDir,
    globalDir,
    cleanup: () => rmSync(baseDir, { recursive: true, force: true }),
  };
}

describe('ejectFacet', () => {
  let dirs: ReturnType<typeof createTestDirs>;

  beforeEach(() => {
    dirs = createTestDirs();
    mocks.builtinDir = dirs.builtinDir;
    mocks.projectFacetDir = join(dirs.projectDir, '.takt', 'personas');
    mocks.globalFacetDir = join(dirs.globalDir, 'personas');

    Object.values(mocks.ui).forEach((fn) => fn.mockClear());
  });

  afterEach(() => {
    dirs.cleanup();
  });

  it('should copy builtin facet to project .takt/{type}/', async () => {
    await ejectFacet('personas', 'coder', { projectDir: dirs.projectDir });

    const destPath = join(dirs.projectDir, '.takt', 'personas', 'coder.md');
    expect(existsSync(destPath)).toBe(true);
    expect(readFileSync(destPath, 'utf-8')).toBe('# Coder Persona\nYou are a coder.');
    expect(mocks.ui.success).toHaveBeenCalled();
  });

  it('should copy builtin facet to global ~/.takt/{type}/ with --global', async () => {
    await ejectFacet('personas', 'coder', { global: true, projectDir: dirs.projectDir });

    const destPath = join(dirs.globalDir, 'personas', 'coder.md');
    expect(existsSync(destPath)).toBe(true);
    expect(readFileSync(destPath, 'utf-8')).toBe('# Coder Persona\nYou are a coder.');
    expect(mocks.ui.success).toHaveBeenCalled();
  });

  it('should skip if facet already exists at destination', async () => {
    const destDir = join(dirs.projectDir, '.takt', 'personas');
    mkdirSync(destDir, { recursive: true });
    writeFileSync(join(destDir, 'coder.md'), 'Custom coder content');

    await ejectFacet('personas', 'coder', { projectDir: dirs.projectDir });

    // File should NOT be overwritten
    expect(readFileSync(join(destDir, 'coder.md'), 'utf-8')).toBe('Custom coder content');
    expect(mocks.ui.warn).toHaveBeenCalledWith(expect.stringContaining('Already exists'));
  });

  it('should show error and list available facets when not found', async () => {
    await ejectFacet('personas', 'nonexistent', { projectDir: dirs.projectDir });

    expect(mocks.ui.error).toHaveBeenCalledWith(expect.stringContaining('not found'));
    expect(mocks.ui.info).toHaveBeenCalledWith(expect.stringContaining('Available'));
  });
});
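The core invariant these eject tests establish is copy-without-clobber: a builtin facet is copied to the destination layer, but an existing file there is never overwritten. A self-contained sketch of that behaviour (the `ejectFile` helper and its return values are illustrative assumptions, not the project's actual API):

```typescript
// Hypothetical sketch of copy-without-clobber eject behaviour,
// demonstrated in a throwaway temp directory.
import { copyFileSync, existsSync, mkdirSync, mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { tmpdir } from 'node:os';

function ejectFile(srcPath: string, destPath: string): 'copied' | 'skipped' {
  if (existsSync(destPath)) return 'skipped'; // keep the user's customization
  mkdirSync(dirname(destPath), { recursive: true });
  copyFileSync(srcPath, destPath);
  return 'copied';
}

const base = mkdtempSync(join(tmpdir(), 'eject-demo-'));
const src = join(base, 'builtins', 'personas', 'coder.md');
mkdirSync(dirname(src), { recursive: true });
writeFileSync(src, '# Coder Persona');

const dest = join(base, '.takt', 'personas', 'coder.md');
const first = ejectFile(src, dest);  // file absent: copied
const second = ejectFile(src, dest); // file present: skipped
console.log(first, second);

rmSync(base, { recursive: true, force: true });
```

Making the second call a no-op (rather than an overwrite prompt) keeps `takt eject` safe to re-run, which is what the "should skip if facet already exists" test encodes.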
202  src/__tests__/engine-parallel-failure.test.ts  Normal file
@@ -0,0 +1,202 @@
/**
 * PieceEngine integration tests: parallel movement partial failure handling.
 *
 * Covers:
 * - One sub-movement fails while another succeeds → piece continues
 * - All sub-movements fail → piece aborts
 * - Failed sub-movement is recorded as blocked with error
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { existsSync, rmSync } from 'node:fs';

// --- Mock setup (must be before imports that use these modules) ---

vi.mock('../agents/runner.js', () => ({
  runAgent: vi.fn(),
}));

vi.mock('../core/piece/evaluation/index.js', () => ({
  detectMatchedRule: vi.fn(),
}));

vi.mock('../core/piece/phase-runner.js', () => ({
  needsStatusJudgmentPhase: vi.fn().mockReturnValue(false),
  runReportPhase: vi.fn().mockResolvedValue(undefined),
  runStatusJudgmentPhase: vi.fn().mockResolvedValue(''),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  generateReportDir: vi.fn().mockReturnValue('test-report-dir'),
}));

// --- Imports (after mocks) ---

import { PieceEngine } from '../core/piece/index.js';
import { runAgent } from '../agents/runner.js';
import { detectMatchedRule } from '../core/piece/index.js';
import {
  makeResponse,
  makeMovement,
  makeRule,
  mockDetectMatchedRuleSequence,
  createTestTmpDir,
  applyDefaultMocks,
} from './engine-test-helpers.js';
import type { PieceConfig } from '../core/models/index.js';

/**
 * Build a piece config that goes directly to a parallel step:
 * parallel-step (arch-review + security-review) → done
 */
function buildParallelOnlyConfig(): PieceConfig {
  return {
    name: 'test-parallel-failure',
    description: 'Test parallel failure handling',
    maxIterations: 10,
    initialMovement: 'reviewers',
    movements: [
      makeMovement('reviewers', {
        parallel: [
          makeMovement('arch-review', {
            rules: [
              makeRule('done', 'COMPLETE'),
              makeRule('needs_fix', 'fix'),
            ],
          }),
          makeMovement('security-review', {
            rules: [
              makeRule('done', 'COMPLETE'),
              makeRule('needs_fix', 'fix'),
            ],
          }),
        ],
        rules: [
          makeRule('any("done")', 'done', {
            isAggregateCondition: true,
            aggregateType: 'any',
            aggregateConditionText: 'done',
          }),
          makeRule('all("needs_fix")', 'fix', {
            isAggregateCondition: true,
            aggregateType: 'all',
            aggregateConditionText: 'needs_fix',
          }),
        ],
      }),
      makeMovement('done', {
        rules: [
          makeRule('completed', 'COMPLETE'),
        ],
      }),
      makeMovement('fix', {
        rules: [
          makeRule('fixed', 'reviewers'),
        ],
      }),
    ],
  };
}

describe('PieceEngine Integration: Parallel Movement Partial Failure', () => {
  let tmpDir: string;

  beforeEach(() => {
    vi.resetAllMocks();
    applyDefaultMocks();
    tmpDir = createTestTmpDir();
  });

  afterEach(() => {
    if (existsSync(tmpDir)) {
      rmSync(tmpDir, { recursive: true, force: true });
    }
  });

  it('should continue when one sub-movement fails but another succeeds', async () => {
    const config = buildParallelOnlyConfig();
    const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });

    const mock = vi.mocked(runAgent);
    // arch-review fails (exit code 1)
    mock.mockRejectedValueOnce(new Error('Claude Code process exited with code 1'));
    // security-review succeeds
    mock.mockResolvedValueOnce(
      makeResponse({ persona: 'security-review', content: 'Security review passed' }),
    );
    // done step
    mock.mockResolvedValueOnce(
      makeResponse({ persona: 'done', content: 'Completed' }),
    );

    mockDetectMatchedRuleSequence([
      // security-review sub-movement rule match (arch-review has no match — it failed)
      { index: 0, method: 'phase1_tag' }, // security-review → done
      { index: 0, method: 'aggregate' }, // reviewers → any("done") matches
      { index: 0, method: 'phase1_tag' }, // done → COMPLETE
    ]);

    const state = await engine.run();

    expect(state.status).toBe('completed');

    // arch-review should be recorded as blocked
    const archReviewOutput = state.movementOutputs.get('arch-review');
    expect(archReviewOutput).toBeDefined();
    expect(archReviewOutput!.status).toBe('blocked');
    expect(archReviewOutput!.error).toContain('exit');

    // security-review should be recorded as done
    const securityReviewOutput = state.movementOutputs.get('security-review');
    expect(securityReviewOutput).toBeDefined();
|
||||
expect(securityReviewOutput!.status).toBe('done');
|
||||
});
|
||||
|
||||
it('should abort when all sub-movements fail', async () => {
|
||||
const config = buildParallelOnlyConfig();
|
||||
const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
|
||||
|
||||
const mock = vi.mocked(runAgent);
|
||||
// Both fail
|
||||
mock.mockRejectedValueOnce(new Error('Claude Code process exited with code 1'));
|
||||
mock.mockRejectedValueOnce(new Error('Claude Code process exited with code 1'));
|
||||
|
||||
const abortFn = vi.fn();
|
||||
engine.on('piece:abort', abortFn);
|
||||
|
||||
const state = await engine.run();
|
||||
|
||||
expect(state.status).toBe('aborted');
|
||||
expect(abortFn).toHaveBeenCalledOnce();
|
||||
const reason = abortFn.mock.calls[0]![1] as string;
|
||||
expect(reason).toContain('All parallel sub-movements failed');
|
||||
});
|
||||
|
||||
it('should record failed sub-movement error message in movementOutputs', async () => {
|
||||
const config = buildParallelOnlyConfig();
|
||||
const engine = new PieceEngine(config, tmpDir, 'test task', { projectCwd: tmpDir });
|
||||
|
||||
const mock = vi.mocked(runAgent);
|
||||
mock.mockRejectedValueOnce(new Error('Session resume failed'));
|
||||
mock.mockResolvedValueOnce(
|
||||
makeResponse({ persona: 'security-review', content: 'OK' }),
|
||||
);
|
||||
mock.mockResolvedValueOnce(
|
||||
makeResponse({ persona: 'done', content: 'Done' }),
|
||||
);
|
||||
|
||||
mockDetectMatchedRuleSequence([
|
||||
{ index: 0, method: 'phase1_tag' },
|
||||
{ index: 0, method: 'aggregate' },
|
||||
{ index: 0, method: 'phase1_tag' },
|
||||
]);
|
||||
|
||||
const state = await engine.run();
|
||||
|
||||
const archReviewOutput = state.movementOutputs.get('arch-review');
|
||||
expect(archReviewOutput).toBeDefined();
|
||||
expect(archReviewOutput!.error).toBe('Session resume failed');
|
||||
expect(archReviewOutput!.content).toBe('');
|
||||
});
|
||||
});
|
||||
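The `any("done")` / `all("needs_fix")` aggregate rules above decide the parent movement's transition from its sub-movements' outcomes. As a hedged illustration of that semantics only — the real engine matches rule conditions via `detectMatchedRule`, and the names/shape here are assumptions, not the actual API:

```typescript
type SubResult = { status: string };

// Illustrative any()/all() aggregation over parallel sub-movement results:
// "any" fires if at least one sub-result matches the condition,
// "all" fires only if every sub-result matches.
function evalAggregate(
  type: 'any' | 'all',
  condition: string,
  results: SubResult[],
): boolean {
  const matches = results.map((r) => r.status === condition);
  return type === 'any' ? matches.some(Boolean) : matches.every(Boolean);
}
```

This mirrors the partial-failure test: one blocked sub-movement plus one `done` sub-movement still satisfies `any("done")`.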
src/__tests__/executor-stderr.test.ts (new file, 29 lines)
@@ -0,0 +1,29 @@
/**
 * Tests for QueryExecutor stderr capture and SdkOptionsBuilder stderr passthrough.
 */

import { describe, it, expect } from 'vitest';
import { buildSdkOptions } from '../infra/claude/options-builder.js';
import type { ClaudeSpawnOptions } from '../infra/claude/types.js';

describe('SdkOptionsBuilder.build() — stderr', () => {
  it('should include stderr callback in SDK options when onStderr is provided', () => {
    const stderrHandler = (_data: string): void => {};
    const spawnOptions: ClaudeSpawnOptions = {
      cwd: '/tmp/test',
      onStderr: stderrHandler,
    };

    const sdkOptions = buildSdkOptions(spawnOptions);
    expect(sdkOptions.stderr).toBe(stderrHandler);
  });

  it('should not include stderr in SDK options when onStderr is not provided', () => {
    const spawnOptions: ClaudeSpawnOptions = {
      cwd: '/tmp/test',
    };

    const sdkOptions = buildSdkOptions(spawnOptions);
    expect(sdkOptions).not.toHaveProperty('stderr');
  });
});
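The second test asserts the key is absent, not merely `undefined`, which points at a conditional-spread construction. A minimal sketch of that pattern (a hypothetical stand-in, not the real builder):

```typescript
// Attach `stderr` only when a handler exists, so the key is otherwise
// absent from the returned object (satisfying a toHaveProperty-style check).
interface SpawnOptionsSketch {
  cwd: string;
  onStderr?: (data: string) => void;
}

function buildSdkOptionsSketch(opts: SpawnOptionsSketch): Record<string, unknown> {
  return {
    cwd: opts.cwd,
    ...(opts.onStderr !== undefined ? { stderr: opts.onStderr } : {}),
  };
}
```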
src/__tests__/facet-resolution.test.ts (new file, 496 lines)
@@ -0,0 +1,496 @@
/**
 * Tests for name-based facet resolution (layer system).
 *
 * Covers:
 * - isResourcePath() helper
 * - resolveFacetByName() 3-layer resolution (project → user → builtin)
 * - resolveRefToContent() with facetType and context
 * - resolvePersona() with context (name-based resolution)
 * - parseFacetType() CLI mapping
 * - Facet directory path helpers
 */

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync, mkdirSync, rmSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import {
  isResourcePath,
  resolveFacetByName,
  resolveRefToContent,
  resolveRefList,
  resolvePersona,
  type FacetResolutionContext,
  type PieceSections,
} from '../infra/config/loaders/resource-resolver.js';
import {
  getProjectFacetDir,
  getGlobalFacetDir,
  getBuiltinFacetDir,
  type FacetType,
} from '../infra/config/paths.js';
import { parseFacetType, VALID_FACET_TYPES } from '../features/config/ejectBuiltin.js';
import { normalizePieceConfig } from '../infra/config/loaders/pieceParser.js';

describe('isResourcePath', () => {
  it('should return true for relative paths starting with ./', () => {
    expect(isResourcePath('./personas/coder.md')).toBe(true);
  });

  it('should return true for relative paths starting with ../', () => {
    expect(isResourcePath('../personas/coder.md')).toBe(true);
  });

  it('should return true for absolute paths', () => {
    expect(isResourcePath('/home/user/coder.md')).toBe(true);
  });

  it('should return true for home directory paths', () => {
    expect(isResourcePath('~/coder.md')).toBe(true);
  });

  it('should return true for paths ending with .md', () => {
    expect(isResourcePath('coder.md')).toBe(true);
  });

  it('should return false for plain names', () => {
    expect(isResourcePath('coder')).toBe(false);
    expect(isResourcePath('architecture-reviewer')).toBe(false);
    expect(isResourcePath('coding')).toBe(false);
  });
});

describe('resolveFacetByName', () => {
  let tempDir: string;
  let projectDir: string;
  let context: FacetResolutionContext;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-facet-test-'));
    projectDir = join(tempDir, 'project');
    mkdirSync(projectDir, { recursive: true });
    context = { projectDir, lang: 'ja' };
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should resolve from builtin when no project/user override exists', () => {
    // Builtin personas exist in the real builtins directory
    const content = resolveFacetByName('coder', 'personas', context);
    expect(content).toBeDefined();
    expect(content).toContain(''); // Just verify it returns something
  });

  it('should resolve from project layer over builtin', () => {
    const projectPersonasDir = join(projectDir, '.takt', 'personas');
    mkdirSync(projectPersonasDir, { recursive: true });
    writeFileSync(join(projectPersonasDir, 'coder.md'), 'Project-level coder persona');

    const content = resolveFacetByName('coder', 'personas', context);
    expect(content).toBe('Project-level coder persona');
  });

  it('should return undefined when facet not found in any layer', () => {
    const content = resolveFacetByName('nonexistent-facet-xyz', 'personas', context);
    expect(content).toBeUndefined();
  });

  it('should resolve different facet types', () => {
    const projectPoliciesDir = join(projectDir, '.takt', 'policies');
    mkdirSync(projectPoliciesDir, { recursive: true });
    writeFileSync(join(projectPoliciesDir, 'custom-policy.md'), 'Custom policy content');

    const content = resolveFacetByName('custom-policy', 'policies', context);
    expect(content).toBe('Custom policy content');
  });

  it('should try project before builtin', () => {
    // Create project override
    const projectPersonasDir = join(projectDir, '.takt', 'personas');
    mkdirSync(projectPersonasDir, { recursive: true });
    writeFileSync(join(projectPersonasDir, 'coder.md'), 'OVERRIDE');

    const content = resolveFacetByName('coder', 'personas', context);
    expect(content).toBe('OVERRIDE');
  });
});

describe('resolveRefToContent with layer resolution', () => {
  let tempDir: string;
  let context: FacetResolutionContext;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-ref-test-'));
    context = { projectDir: tempDir, lang: 'ja' };
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should prefer resolvedMap over layer resolution', () => {
    const resolvedMap = { 'coding': 'Map content for coding' };
    const content = resolveRefToContent('coding', resolvedMap, tempDir, 'policies', context);
    expect(content).toBe('Map content for coding');
  });

  it('should use layer resolution for name refs when not in resolvedMap', () => {
    const policiesDir = join(tempDir, '.takt', 'policies');
    mkdirSync(policiesDir, { recursive: true });
    writeFileSync(join(policiesDir, 'coding.md'), 'Project coding policy');

    const content = resolveRefToContent('coding', undefined, tempDir, 'policies', context);
    expect(content).toBe('Project coding policy');
  });

  it('should use path resolution for path-like refs', () => {
    const policyFile = join(tempDir, 'my-policy.md');
    writeFileSync(policyFile, 'Inline policy');

    const content = resolveRefToContent('./my-policy.md', undefined, tempDir);
    expect(content).toBe('Inline policy');
  });

  it('should fall back to path resolution when no context', () => {
    const content = resolveRefToContent('some-name', undefined, tempDir);
    // No context, no file — returns the spec as-is (inline content behavior)
    expect(content).toBe('some-name');
  });
});

describe('resolveRefList with layer resolution', () => {
  let tempDir: string;
  let context: FacetResolutionContext;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-reflist-test-'));
    context = { projectDir: tempDir, lang: 'ja' };
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should resolve array of name refs via layer resolution', () => {
    const policiesDir = join(tempDir, '.takt', 'policies');
    mkdirSync(policiesDir, { recursive: true });
    writeFileSync(join(policiesDir, 'policy-a.md'), 'Policy A content');
    writeFileSync(join(policiesDir, 'policy-b.md'), 'Policy B content');

    const result = resolveRefList(
      ['policy-a', 'policy-b'],
      undefined,
      tempDir,
      'policies',
      context,
    );

    expect(result).toEqual(['Policy A content', 'Policy B content']);
  });

  it('should handle mixed array of name refs and path refs', () => {
    const policiesDir = join(tempDir, '.takt', 'policies');
    mkdirSync(policiesDir, { recursive: true });
    writeFileSync(join(policiesDir, 'name-policy.md'), 'Name-resolved policy');

    const pathFile = join(tempDir, 'local-policy.md');
    writeFileSync(pathFile, 'Path-resolved policy');

    const result = resolveRefList(
      ['name-policy', './local-policy.md'],
      undefined,
      tempDir,
      'policies',
      context,
    );

    expect(result).toEqual(['Name-resolved policy', 'Path-resolved policy']);
  });

  it('should return undefined for undefined input', () => {
    const result = resolveRefList(undefined, undefined, tempDir, 'policies', context);
    expect(result).toBeUndefined();
  });

  it('should handle single string ref (not array)', () => {
    const policiesDir = join(tempDir, '.takt', 'policies');
    mkdirSync(policiesDir, { recursive: true });
    writeFileSync(join(policiesDir, 'single.md'), 'Single policy');

    const result = resolveRefList(
      'single',
      undefined,
      tempDir,
      'policies',
      context,
    );

    expect(result).toEqual(['Single policy']);
  });

  it('should prefer resolvedMap over layer resolution', () => {
    const resolvedMap = { coding: 'Map content for coding' };
    const result = resolveRefList(
      ['coding'],
      resolvedMap,
      tempDir,
      'policies',
      context,
    );

    expect(result).toEqual(['Map content for coding']);
  });
});

describe('resolvePersona with layer resolution', () => {
  let tempDir: string;
  let projectDir: string;
  let context: FacetResolutionContext;
  const emptySections: PieceSections = {};

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-persona-test-'));
    projectDir = join(tempDir, 'project');
    mkdirSync(projectDir, { recursive: true });
    context = { projectDir, lang: 'ja' };
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should resolve persona by name from builtin', () => {
    const result = resolvePersona('coder', emptySections, tempDir, context);
    expect(result.personaSpec).toBe('coder');
    expect(result.personaPath).toBeDefined();
    expect(result.personaPath).toContain('coder.md');
  });

  it('should resolve persona from project layer', () => {
    const projectPersonasDir = join(projectDir, '.takt', 'personas');
    mkdirSync(projectPersonasDir, { recursive: true });
    const personaPath = join(projectPersonasDir, 'custom-persona.md');
    writeFileSync(personaPath, 'Custom persona content');

    const result = resolvePersona('custom-persona', emptySections, tempDir, context);
    expect(result.personaSpec).toBe('custom-persona');
    expect(result.personaPath).toBe(personaPath);
  });

  it('should prefer section map over layer resolution', () => {
    const personaFile = join(tempDir, 'explicit.md');
    writeFileSync(personaFile, 'Explicit persona');

    const sections: PieceSections = {
      personas: { 'my-persona': './explicit.md' },
    };

    const result = resolvePersona('my-persona', sections, tempDir, context);
    expect(result.personaSpec).toBe('./explicit.md');
    expect(result.personaPath).toBe(personaFile);
  });

  it('should handle path-like persona specs directly', () => {
    const personaFile = join(tempDir, 'personas', 'coder.md');
    mkdirSync(join(tempDir, 'personas'), { recursive: true });
    writeFileSync(personaFile, 'Path persona');

    const result = resolvePersona('../personas/coder.md', emptySections, tempDir);
    // Path-like spec should be resolved as resource path, not name
    expect(result.personaSpec).toBe('../personas/coder.md');
  });

  it('should return empty for undefined persona', () => {
    const result = resolvePersona(undefined, emptySections, tempDir, context);
    expect(result).toEqual({});
  });
});

describe('facet directory path helpers', () => {
  it('getProjectFacetDir should return .takt/{type}/ path', () => {
    const dir = getProjectFacetDir('/my/project', 'personas');
    expect(dir).toContain('.takt');
    expect(dir).toContain('personas');
  });

  it('getGlobalFacetDir should return path with facet type', () => {
    const dir = getGlobalFacetDir('policies');
    expect(dir).toContain('policies');
  });

  it('getBuiltinFacetDir should return path with lang and facet type', () => {
    const dir = getBuiltinFacetDir('ja', 'knowledge');
    expect(dir).toContain('ja');
    expect(dir).toContain('knowledge');
  });

  it('should work with all facet types', () => {
    const types: FacetType[] = ['personas', 'policies', 'knowledge', 'instructions', 'output-contracts'];
    for (const t of types) {
      expect(getProjectFacetDir('/proj', t)).toContain(t);
      expect(getGlobalFacetDir(t)).toContain(t);
      expect(getBuiltinFacetDir('en', t)).toContain(t);
    }
  });
});

describe('parseFacetType', () => {
  it('should map singular to plural facet types', () => {
    expect(parseFacetType('persona')).toBe('personas');
    expect(parseFacetType('policy')).toBe('policies');
    expect(parseFacetType('knowledge')).toBe('knowledge');
    expect(parseFacetType('instruction')).toBe('instructions');
    expect(parseFacetType('output-contract')).toBe('output-contracts');
  });

  it('should return undefined for invalid facet types', () => {
    expect(parseFacetType('invalid')).toBeUndefined();
    expect(parseFacetType('personas')).toBeUndefined();
    expect(parseFacetType('')).toBeUndefined();
  });

  it('VALID_FACET_TYPES should contain all singular forms', () => {
    expect(VALID_FACET_TYPES).toContain('persona');
    expect(VALID_FACET_TYPES).toContain('policy');
    expect(VALID_FACET_TYPES).toContain('knowledge');
    expect(VALID_FACET_TYPES).toContain('instruction');
    expect(VALID_FACET_TYPES).toContain('output-contract');
    expect(VALID_FACET_TYPES).toHaveLength(5);
  });
});

describe('normalizePieceConfig with layer resolution', () => {
  let tempDir: string;
  let pieceDir: string;
  let projectDir: string;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-normalize-test-'));
    pieceDir = join(tempDir, 'pieces');
    projectDir = join(tempDir, 'project');
    mkdirSync(pieceDir, { recursive: true });
    mkdirSync(projectDir, { recursive: true });
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should resolve persona by name when section map is absent and context provided', () => {
    const raw = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          instruction: '{task}',
        },
      ],
    };

    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
    const config = normalizePieceConfig(raw, pieceDir, context);

    expect(config.movements[0]!.persona).toBe('coder');
    // With context, it should find the builtin coder persona
    expect(config.movements[0]!.personaPath).toBeDefined();
    expect(config.movements[0]!.personaPath).toContain('coder.md');
  });

  it('should resolve policy by name when section map is absent', () => {
    // Create project-level policy
    const policiesDir = join(projectDir, '.takt', 'policies');
    mkdirSync(policiesDir, { recursive: true });
    writeFileSync(join(policiesDir, 'custom-policy.md'), '# Custom Policy\nBe nice.');

    const raw = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          policy: 'custom-policy',
          instruction: '{task}',
        },
      ],
    };

    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
    const config = normalizePieceConfig(raw, pieceDir, context);

    expect(config.movements[0]!.policyContents).toBeDefined();
    expect(config.movements[0]!.policyContents![0]).toBe('# Custom Policy\nBe nice.');
  });

  it('should prefer section map over layer resolution', () => {
    // Create section map entry
    const personaFile = join(pieceDir, 'my-coder.md');
    writeFileSync(personaFile, 'Section map coder');

    const raw = {
      name: 'test-piece',
      personas: {
        coder: './my-coder.md',
      },
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          instruction: '{task}',
        },
      ],
    };

    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
    const config = normalizePieceConfig(raw, pieceDir, context);

    // Section map should be used, not layer resolution
    expect(config.movements[0]!.persona).toBe('./my-coder.md');
    expect(config.movements[0]!.personaPath).toBe(personaFile);
  });

  it('should work without context (backward compatibility)', () => {
    const raw = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          instruction: '{task}',
        },
      ],
    };

    // No context — backward compatibility mode
    const config = normalizePieceConfig(raw, pieceDir);

    // Without context, name 'coder' resolves as relative path from pieceDir
    expect(config.movements[0]!.persona).toBe('coder');
  });

  it('should resolve knowledge by name from project layer', () => {
    const knowledgeDir = join(projectDir, '.takt', 'knowledge');
    mkdirSync(knowledgeDir, { recursive: true });
    writeFileSync(join(knowledgeDir, 'domain-kb.md'), '# Domain Knowledge');

    const raw = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          knowledge: 'domain-kb',
          instruction: '{task}',
        },
      ],
    };

    const context: FacetResolutionContext = { projectDir, lang: 'ja' };
    const config = normalizePieceConfig(raw, pieceDir, context);

    expect(config.movements[0]!.knowledgeContents).toBeDefined();
    expect(config.movements[0]!.knowledgeContents![0]).toBe('# Domain Knowledge');
  });
});
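The tests above pin down first-match-wins behavior across the project → user → builtin layers. A hedged sketch of that lookup (the real `resolveFacetByName` derives the layer directories from its context; here they are passed in explicitly so the ordering is visible, and the function name is hypothetical):

```typescript
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// First-match-wins lookup of `<name>.md` across ordered layer directories
// (project first, then user, then builtin). Returns undefined on a miss.
function resolveFacetSketch(
  name: string,
  layerDirs: string[],
): string | undefined {
  for (const dir of layerDirs) {
    const candidate = join(dir, `${name}.md`);
    if (existsSync(candidate)) {
      return readFileSync(candidate, 'utf-8'); // earlier layer shadows later ones
    }
  }
  return undefined;
}
```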
src/__tests__/formatMovementPreviews.test.ts (new file, 139 lines)
@@ -0,0 +1,139 @@
/**
 * Tests for formatMovementPreviews
 */

import { describe, it, expect } from 'vitest';
import type { MovementPreview } from '../infra/config/loaders/pieceResolver.js';
import { formatMovementPreviews } from '../features/interactive/interactive.js';

describe('formatMovementPreviews', () => {
  const basePreviews: MovementPreview[] = [
    {
      name: 'plan',
      personaDisplayName: 'Planner',
      personaContent: 'You are a planner.',
      instructionContent: 'Create a plan for {task}',
      allowedTools: ['Read', 'Glob', 'Grep'],
      canEdit: false,
    },
    {
      name: 'implement',
      personaDisplayName: 'Coder',
      personaContent: 'You are a coder.',
      instructionContent: 'Implement the plan.',
      allowedTools: ['Read', 'Edit', 'Bash'],
      canEdit: true,
    },
  ];

  it('should format previews with English labels', () => {
    const result = formatMovementPreviews(basePreviews, 'en');

    expect(result).toContain('### 1. plan (Planner)');
    expect(result).toContain('**Persona:**');
    expect(result).toContain('You are a planner.');
    expect(result).toContain('**Instruction:**');
    expect(result).toContain('Create a plan for {task}');
    expect(result).toContain('**Tools:** Read, Glob, Grep');
    expect(result).toContain('**Edit:** No');

    expect(result).toContain('### 2. implement (Coder)');
    expect(result).toContain('**Tools:** Read, Edit, Bash');
    expect(result).toContain('**Edit:** Yes');
  });

  it('should format previews with Japanese labels', () => {
    const result = formatMovementPreviews(basePreviews, 'ja');

    expect(result).toContain('### 1. plan (Planner)');
    expect(result).toContain('**ペルソナ:**');
    expect(result).toContain('**インストラクション:**');
    expect(result).toContain('**ツール:** Read, Glob, Grep');
    expect(result).toContain('**編集:** 不可');
    expect(result).toContain('**編集:** 可');
  });

  it('should show "None" when no tools are allowed (English)', () => {
    const previews: MovementPreview[] = [
      {
        name: 'step',
        personaDisplayName: 'Agent',
        personaContent: 'Agent persona',
        instructionContent: 'Do something',
        allowedTools: [],
        canEdit: false,
      },
    ];

    const result = formatMovementPreviews(previews, 'en');

    expect(result).toContain('**Tools:** None');
  });

  it('should show "なし" when no tools are allowed (Japanese)', () => {
    const previews: MovementPreview[] = [
      {
        name: 'step',
        personaDisplayName: 'Agent',
        personaContent: 'Agent persona',
        instructionContent: 'Do something',
        allowedTools: [],
        canEdit: false,
      },
    ];

    const result = formatMovementPreviews(previews, 'ja');

    expect(result).toContain('**ツール:** なし');
  });

  it('should skip empty persona content', () => {
    const previews: MovementPreview[] = [
      {
        name: 'step',
        personaDisplayName: 'Agent',
        personaContent: '',
        instructionContent: 'Do something',
        allowedTools: [],
        canEdit: false,
      },
    ];

    const result = formatMovementPreviews(previews, 'en');

    expect(result).not.toContain('**Persona:**');
    expect(result).toContain('**Instruction:**');
  });

  it('should skip empty instruction content', () => {
    const previews: MovementPreview[] = [
      {
        name: 'step',
        personaDisplayName: 'Agent',
        personaContent: 'Some persona',
        instructionContent: '',
        allowedTools: [],
        canEdit: false,
      },
    ];

    const result = formatMovementPreviews(previews, 'en');

    expect(result).toContain('**Persona:**');
    expect(result).not.toContain('**Instruction:**');
  });

  it('should return empty string for empty array', () => {
    const result = formatMovementPreviews([], 'en');

    expect(result).toBe('');
  });

  it('should separate multiple previews with double newline', () => {
    const result = formatMovementPreviews(basePreviews, 'en');

    // Two movements should be separated by \n\n
    const parts = result.split('\n\n### ');
    expect(parts.length).toBe(2);
  });
});
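The label assertions above imply a per-language label table plus a fallback for empty tool lists. A minimal sketch of just that piece (table layout and helper name are assumptions, not the real implementation in `features/interactive`):

```typescript
// Bilingual label table matching the strings the tests assert.
const LABELS = {
  en: { tools: '**Tools:**', none: 'None' },
  ja: { tools: '**ツール:**', none: 'なし' },
} as const;

// Render the tools line: comma-joined list, or the localized "none" word.
function toolsLine(tools: string[], lang: 'en' | 'ja'): string {
  const l = LABELS[lang];
  return `${l.tools} ${tools.length > 0 ? tools.join(', ') : l.none}`;
}
```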
src/__tests__/getCurrentBranch.test.ts (new file, 57 lines)
@@ -0,0 +1,57 @@
/**
 * Tests for getCurrentBranch
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import { execFileSync } from 'node:child_process';

vi.mock('node:child_process', () => ({
  execFileSync: vi.fn(),
}));

const mockExecFileSync = vi.mocked(execFileSync);

import { getCurrentBranch } from '../infra/task/git.js';

beforeEach(() => {
  vi.clearAllMocks();
});

describe('getCurrentBranch', () => {
  it('should return the current branch name', () => {
    // Given
    mockExecFileSync.mockReturnValue('feature/my-branch\n');

    // When
    const result = getCurrentBranch('/project');

    // Then
    expect(result).toBe('feature/my-branch');
    expect(mockExecFileSync).toHaveBeenCalledWith(
      'git',
      ['rev-parse', '--abbrev-ref', 'HEAD'],
      { cwd: '/project', encoding: 'utf-8', stdio: 'pipe' },
    );
  });

  it('should trim whitespace from output', () => {
    // Given
    mockExecFileSync.mockReturnValue(' main \n');

    // When
    const result = getCurrentBranch('/project');

    // Then
    expect(result).toBe('main');
  });

  it('should propagate errors from git', () => {
    // Given
    mockExecFileSync.mockImplementation(() => {
      throw new Error('not a git repository');
    });

    // When / Then
    expect(() => getCurrentBranch('/not-a-repo')).toThrow('not a git repository');
  });
});
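These tests fully specify the git invocation and the trimming behavior, so a sketch of the function they describe follows directly (a hypothetical stand-in for the real `getCurrentBranch` in `infra/task/git`; the trim step is split out as a pure helper):

```typescript
import { execFileSync } from 'node:child_process';

// Pure normalization step: git appends a trailing newline to the branch name.
function normalizeBranchOutput(raw: string): string {
  return raw.trim();
}

// Run `git rev-parse --abbrev-ref HEAD` in the given cwd; errors (e.g. not a
// git repository) propagate to the caller, matching the third test above.
function getCurrentBranchSketch(cwd: string): string {
  const raw = execFileSync('git', ['rev-parse', '--abbrev-ref', 'HEAD'], {
    cwd,
    encoding: 'utf-8',
    stdio: 'pipe',
  });
  return normalizeBranchOutput(raw);
}
```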
@@ -241,6 +241,101 @@ describe('loadGlobalConfig', () => {
    expect(config.preventSleep).toBeUndefined();
  });

  it('should load notification_sound config from config.yaml', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(
      getGlobalConfigPath(),
      'language: en\nnotification_sound: false\n',
      'utf-8',
    );

    const config = loadGlobalConfig();
    expect(config.notificationSound).toBe(false);
  });

  it('should save and reload notification_sound config', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(getGlobalConfigPath(), 'language: en\n', 'utf-8');

    const config = loadGlobalConfig();
    config.notificationSound = true;
    saveGlobalConfig(config);
    invalidateGlobalConfigCache();

    const reloaded = loadGlobalConfig();
    expect(reloaded.notificationSound).toBe(true);
  });

  it('should save notification_sound: false when explicitly set', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(getGlobalConfigPath(), 'language: en\n', 'utf-8');

    const config = loadGlobalConfig();
    config.notificationSound = false;
    saveGlobalConfig(config);
    invalidateGlobalConfigCache();

    const reloaded = loadGlobalConfig();
    expect(reloaded.notificationSound).toBe(false);
  });

  it('should have undefined notificationSound by default', () => {
    const config = loadGlobalConfig();
    expect(config.notificationSound).toBeUndefined();
  });

  it('should load interactive_preview_movements config from config.yaml', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(
      getGlobalConfigPath(),
      'language: en\ninteractive_preview_movements: 5\n',
      'utf-8',
    );

    const config = loadGlobalConfig();
    expect(config.interactivePreviewMovements).toBe(5);
  });

  it('should save and reload interactive_preview_movements config', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(getGlobalConfigPath(), 'language: en\n', 'utf-8');

    const config = loadGlobalConfig();
    config.interactivePreviewMovements = 7;
    saveGlobalConfig(config);
    invalidateGlobalConfigCache();

    const reloaded = loadGlobalConfig();
    expect(reloaded.interactivePreviewMovements).toBe(7);
  });

  it('should default interactive_preview_movements to 3', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(getGlobalConfigPath(), 'language: en\n', 'utf-8');

    const config = loadGlobalConfig();
    expect(config.interactivePreviewMovements).toBe(3);
  });

  it('should accept interactive_preview_movements: 0 to disable', () => {
    const taktDir = join(testHomeDir, '.takt');
    mkdirSync(taktDir, { recursive: true });
    writeFileSync(
      getGlobalConfigPath(),
      'language: en\ninteractive_preview_movements: 0\n',
      'utf-8',
    );

    const config = loadGlobalConfig();
    expect(config.interactivePreviewMovements).toBe(0);
});
|
||||
|
||||
describe('provider/model compatibility validation', () => {
|
||||
it('should throw when provider is codex but model is a Claude alias (opus)', () => {
|
||||
const taktDir = join(testHomeDir, '.takt');
|
||||
|
||||
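The config tests above exercise snake_case YAML keys (`notification_sound`, `interactive_preview_movements`) surfacing as camelCase fields on the loaded config. A minimal sketch of that key mapping, as a hypothetical helper (not takt's actual loader):

```typescript
// Hypothetical sketch: map snake_case YAML keys to camelCase config fields.
// Inferred from the tests above, not takt's real implementation.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_m, c: string) => c.toUpperCase());
}

function camelizeKeys(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    out[snakeToCamel(key)] = value; // keys without underscores pass through unchanged
  }
  return out;
}
```

Note the round-trip tests also imply the reverse mapping on save, so a real loader would pair this with a camelCase-to-snake_case serializer.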
@@ -608,10 +608,11 @@ describe('interactiveMode', () => {
      expect(result.action).toBe('cancel');
    });

-   it('should ignore arrow keys in normal mode', async () => {
-     // Given: text with arrow keys interspersed (arrows are ignored)
+   it('should move cursor with arrow keys and insert at position', async () => {
+     // Given: type "hllo", left 3 → cursor at 1, type "e", Enter
+     // buffer: "h" + "e" + "llo" = "hello"
      setupRawStdin([
-       'he\x1B[Dllo\x1B[C\r',
+       'hllo\x1B[D\x1B[D\x1B[De\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);
@@ -619,7 +620,7 @@ describe('interactiveMode', () => {
      // When
      const result = await interactiveMode('/project');

-     // Then: arrows are ignored, text is "hello"
+     // Then: arrow keys move cursor, "e" inserted at position 1 → "hello"
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello');
@@ -637,5 +638,302 @@ describe('interactiveMode', () => {
      // Then: empty input is skipped, falls through to /cancel
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+U to clear current line', async () => {
      // Given: type "hello", Ctrl+U (\x15), type "world", Enter
      setupRawStdin([
        'hello\x15world\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      // When
      const result = await interactiveMode('/project');

      // Then: "hello" was cleared by Ctrl+U, only "world" remains
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('world');
      expect(prompt).not.toContain('helloworld');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+W to delete previous word', async () => {
      // Given: type "hello world", Ctrl+W (\x17), Enter
      setupRawStdin([
        'hello world\x17\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      // When
      const result = await interactiveMode('/project');

      // Then: "world" was deleted by Ctrl+W, "hello " remains
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello');
      expect(prompt).not.toContain('world');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+H (backspace alternative) to delete character', async () => {
      // Given: type "ab", Ctrl+H (\x08), type "c", Enter
      setupRawStdin([
        'ab\x08c\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      // When
      const result = await interactiveMode('/project');

      // Then: Ctrl+H deletes 'b', buffer is "ac"
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('ac');
      expect(result.action).toBe('cancel');
    });

    it('should ignore unknown control characters (e.g. Ctrl+G)', async () => {
      // Given: type "ab", Ctrl+G (\x07, bell), type "c", Enter
      setupRawStdin([
        'ab\x07c\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      // When
      const result = await interactiveMode('/project');

      // Then: Ctrl+G is ignored, buffer is "abc"
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('abc');
      expect(result.action).toBe('cancel');
    });
  });

  describe('cursor management', () => {
    it('should move cursor left with arrow key and insert at position', async () => {
      // Given: type "helo", left 2 → cursor at 2 (before 'l'), type "l", Enter
      // insert at index 2: "he" + "l" + "lo" = "hello"
      setupRawStdin([
        'helo\x1B[D\x1B[Dl\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      // When
      const result = await interactiveMode('/project');

      // Then: buffer should be "hello"
      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello');
      expect(result.action).toBe('cancel');
    });

    it('should move cursor right with arrow key after moving left', async () => {
      // "hello" left 3 → cursor at 2, right 1 → cursor at 3, type "X" → "helXlo"
      setupRawStdin([
        'hello\x1B[D\x1B[D\x1B[D\x1B[CX\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('helXlo');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+A to move cursor to beginning of line', async () => {
      // Type "world", Ctrl+A, type "hello ", Enter → "hello world"
      setupRawStdin([
        'world\x01hello \r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello world');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+A via Kitty CSI-u to move cursor to beginning', async () => {
      // Type "test", Ctrl+A via Kitty (\x1B[97;5u), type "X", Enter → "Xtest"
      setupRawStdin([
        'test\x1B[97;5uX\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('Xtest');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+E to move cursor to end of line', async () => {
      // Type "hello", Ctrl+A, Ctrl+E, type "!", Enter → "hello!"
      setupRawStdin([
        'hello\x01\x05!\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello!');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+K to delete from cursor to end of line', async () => {
      // Type "hello world", left 6 (cursor before "world"), Ctrl+K, Enter → "hello"
      // "hello world" length=11, left 6 → cursor at 5 (space before "world")
      // Ctrl+K deletes from 5 to 11 → " world" removed → buffer "hello"
      setupRawStdin([
        'hello world\x1B[D\x1B[D\x1B[D\x1B[D\x1B[D\x1B[D\x0B\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello');
      expect(prompt).not.toContain('hello world');
      expect(result.action).toBe('cancel');
    });

    it('should handle backspace in middle of text', async () => {
      // Type "helllo", left 2, backspace, Enter
      // "helllo" cursor at 6, left 2 → cursor at 4, backspace deletes [3]='l' → "hello"
      setupRawStdin([
        'helllo\x1B[D\x1B[D\x7F\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello');
      expect(result.action).toBe('cancel');
    });

    it('should handle Home key to move to beginning of line', async () => {
      // Type "world", Home (\x1B[H), type "hello ", Enter → "hello world"
      setupRawStdin([
        'world\x1B[Hhello \r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello world');
      expect(result.action).toBe('cancel');
    });

    it('should handle End key to move to end of line', async () => {
      // Type "hello", Home, End (\x1B[F), type "!", Enter → "hello!"
      setupRawStdin([
        'hello\x1B[H\x1B[F!\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello!');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+W with cursor in middle of text', async () => {
      // Type "hello world!", left 1 (before !), Ctrl+W, Enter
      // cursor at 11, Ctrl+W deletes "world" → "hello !"
      setupRawStdin([
        'hello world!\x1B[D\x17\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('hello !');
      expect(result.action).toBe('cancel');
    });

    it('should handle Ctrl+U with cursor in middle of text', async () => {
      // Type "hello world", left 5 (cursor at 6, before "world"), Ctrl+U, Enter
      // Ctrl+U deletes "hello " → buffer becomes "world"
      setupRawStdin([
        'hello world\x1B[D\x1B[D\x1B[D\x1B[D\x1B[D\x15\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('world');
      expect(prompt).not.toContain('hello');
      expect(result.action).toBe('cancel');
    });

    it('should not move cursor past line boundaries with arrow keys', async () => {
      // Type "ab", left 3 (should stop at 0), type "X", Enter → "Xab"
      setupRawStdin([
        'ab\x1B[D\x1B[D\x1B[DX\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('Xab');
      expect(result.action).toBe('cancel');
    });

    it('should not move cursor past line end with right arrow', async () => {
      // Type "ab", right 2 (already at end, no effect), type "c", Enter → "abc"
      setupRawStdin([
        'ab\x1B[C\x1B[Cc\r',
        '/cancel\r',
      ]);
      setupMockProvider(['response']);

      const result = await interactiveMode('/project');

      const mockProvider = mockGetProvider.mock.results[0]!.value as { _call: ReturnType<typeof vi.fn> };
      const prompt = mockProvider._call.mock.calls[0]?.[0] as string;
      expect(prompt).toContain('abc');
      expect(result.action).toBe('cancel');
    });
  });
});
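The raw-stdin inputs in these tests lean on standard terminal sequences: `\x1B[D`/`\x1B[C` arrows, `\x1B[H`/`\x1B[F` Home/End, `\x7F` backspace, and `\x01`/`\x05`/`\x0B` for Ctrl+A/E/K. A minimal sketch of applying such keys to a buffer-and-cursor pair (a hypothetical editor handling only single-final CSI sequences, not the real lineEditor, and not the Kitty CSI-u encoding):

```typescript
// Hypothetical single-line editor: applies a chunk of raw terminal input to a
// (buffer, cursor) pair. Covers only the keys exercised by the tests above.
function applyInput(input: string, buffer = '', cursor = 0): { buffer: string; cursor: number } {
  let i = 0;
  while (i < input.length) {
    const ch = input[i]!;
    if (ch === '\x1B' && input[i + 1] === '[') {
      const code = input[i + 2];
      if (code === 'D') cursor = Math.max(0, cursor - 1);                  // left arrow, clamped
      else if (code === 'C') cursor = Math.min(buffer.length, cursor + 1); // right arrow, clamped
      else if (code === 'H') cursor = 0;                                   // Home
      else if (code === 'F') cursor = buffer.length;                       // End
      i += 3;
      continue;
    }
    if (ch === '\x7F') {                                                   // backspace before cursor
      if (cursor > 0) { buffer = buffer.slice(0, cursor - 1) + buffer.slice(cursor); cursor--; }
    } else if (ch === '\x01') cursor = 0;                                  // Ctrl+A
    else if (ch === '\x05') cursor = buffer.length;                        // Ctrl+E
    else if (ch === '\x0B') buffer = buffer.slice(0, cursor);              // Ctrl+K: kill to end
    else if (ch >= ' ') {                                                  // printable: insert at cursor
      buffer = buffer.slice(0, cursor) + ch + buffer.slice(cursor);
      cursor++;
    }
    i++;
  }
  return { buffer, cursor };
}
```

With this model, the "helo" + two lefts + "l" input from the first cursor test yields "hello", matching the test's expectation.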
353
src/__tests__/it-notification-sound.test.ts
Normal file
@@ -0,0 +1,353 @@
/**
 * Integration test: notification sound ON/OFF in executePiece().
 *
 * Verifies that:
 * - notificationSound: undefined (default) → playWarningSound / notifySuccess / notifyError are called
 * - notificationSound: true → playWarningSound / notifySuccess / notifyError are called
 * - notificationSound: false → playWarningSound / notifySuccess / notifyError are NOT called
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { existsSync, rmSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { randomUUID } from 'node:crypto';

// --- Hoisted mocks (must be before vi.mock calls) ---

const {
  MockPieceEngine,
  mockInterruptAllQueries,
  mockLoadGlobalConfig,
  mockNotifySuccess,
  mockNotifyError,
  mockPlayWarningSound,
  mockSelectOption,
} = vi.hoisted(() => {
  // eslint-disable-next-line @typescript-eslint/no-require-imports
  const { EventEmitter: EE } = require('node:events') as typeof import('node:events');

  const mockInterruptAllQueries = vi.fn().mockReturnValue(0);
  const mockLoadGlobalConfig = vi.fn().mockReturnValue({ provider: 'claude' });
  const mockNotifySuccess = vi.fn();
  const mockNotifyError = vi.fn();
  const mockPlayWarningSound = vi.fn();
  const mockSelectOption = vi.fn().mockResolvedValue('stop');

  // Mock PieceEngine that can simulate complete / abort / iteration-limit
  class MockPieceEngine extends EE {
    static latestInstance: MockPieceEngine | null = null;

    private runResolve: ((value: { status: string; iteration: number }) => void) | null = null;
    private onIterationLimit: ((req: unknown) => Promise<number | null>) | undefined;

    constructor(
      _config: unknown,
      _cwd: string,
      _task: string,
      options: { onIterationLimit?: (req: unknown) => Promise<number | null> },
    ) {
      super();
      this.onIterationLimit = options?.onIterationLimit;
      MockPieceEngine.latestInstance = this;
    }

    abort(): void {
      const state = { status: 'aborted', iteration: 1 };
      this.emit('piece:abort', state, 'user_interrupted');
      if (this.runResolve) {
        this.runResolve(state);
        this.runResolve = null;
      }
    }

    complete(): void {
      const state = { status: 'completed', iteration: 3 };
      this.emit('piece:complete', state);
      if (this.runResolve) {
        this.runResolve(state);
        this.runResolve = null;
      }
    }

    async triggerIterationLimit(): Promise<void> {
      if (this.onIterationLimit) {
        await this.onIterationLimit({
          currentIteration: 10,
          maxIterations: 10,
          currentMovement: 'step1',
        });
      }
    }

    async run(): Promise<{ status: string; iteration: number }> {
      return new Promise((resolve) => {
        this.runResolve = resolve;
      });
    }
  }

  return {
    MockPieceEngine,
    mockInterruptAllQueries,
    mockLoadGlobalConfig,
    mockNotifySuccess,
    mockNotifyError,
    mockPlayWarningSound,
    mockSelectOption,
  };
});

// --- Module mocks ---

vi.mock('../core/piece/index.js', () => ({
  PieceEngine: MockPieceEngine,
}));

vi.mock('../infra/claude/index.js', () => ({
  callAiJudge: vi.fn(),
  detectRuleIndex: vi.fn(),
  interruptAllQueries: mockInterruptAllQueries,
}));

vi.mock('../infra/config/index.js', () => ({
  loadPersonaSessions: vi.fn().mockReturnValue({}),
  updatePersonaSession: vi.fn(),
  loadWorktreeSessions: vi.fn().mockReturnValue({}),
  updateWorktreeSession: vi.fn(),
  loadGlobalConfig: mockLoadGlobalConfig,
  saveSessionState: vi.fn(),
}));

vi.mock('../shared/context.js', () => ({
  isQuietMode: vi.fn().mockReturnValue(true),
}));

vi.mock('../shared/ui/index.js', () => ({
  header: vi.fn(),
  info: vi.fn(),
  warn: vi.fn(),
  error: vi.fn(),
  success: vi.fn(),
  status: vi.fn(),
  blankLine: vi.fn(),
  StreamDisplay: vi.fn().mockImplementation(() => ({
    createHandler: vi.fn().mockReturnValue(vi.fn()),
    flush: vi.fn(),
  })),
}));

vi.mock('../infra/fs/index.js', () => ({
  generateSessionId: vi.fn().mockReturnValue('test-session-id'),
  createSessionLog: vi.fn().mockReturnValue({
    startTime: new Date().toISOString(),
    iterations: 0,
  }),
  finalizeSessionLog: vi.fn().mockImplementation((log, _status) => ({
    ...log,
    status: _status,
    endTime: new Date().toISOString(),
  })),
  updateLatestPointer: vi.fn(),
  initNdjsonLog: vi.fn().mockReturnValue('/tmp/test-log.jsonl'),
  appendNdjsonLine: vi.fn(),
}));

vi.mock('../shared/utils/index.js', () => ({
  createLogger: vi.fn().mockReturnValue({
    debug: vi.fn(),
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
  }),
  notifySuccess: mockNotifySuccess,
  notifyError: mockNotifyError,
  playWarningSound: mockPlayWarningSound,
  preventSleep: vi.fn(),
}));

vi.mock('../shared/prompt/index.js', () => ({
  selectOption: mockSelectOption,
  promptInput: vi.fn(),
}));

vi.mock('../shared/i18n/index.js', () => ({
  getLabel: vi.fn().mockImplementation((key: string) => key),
}));

vi.mock('../shared/exitCodes.js', () => ({
  EXIT_SIGINT: 130,
}));

// --- Import under test (after mocks) ---

import { executePiece } from '../features/tasks/execute/pieceExecution.js';
import type { PieceConfig } from '../core/models/index.js';

// --- Helpers ---

function makeConfig(): PieceConfig {
  return {
    name: 'test-notify',
    maxIterations: 10,
    initialMovement: 'step1',
    movements: [
      {
        name: 'step1',
        persona: '../agents/coder.md',
        personaDisplayName: 'coder',
        instructionTemplate: 'Do something',
        passPreviousResponse: true,
        rules: [
          { condition: 'done', next: 'COMPLETE' },
          { condition: 'fail', next: 'ABORT' },
        ],
      },
    ],
  };
}

// --- Tests ---

describe('executePiece: notification sound behavior', () => {
  let tmpDir: string;
  let savedSigintListeners: ((...args: unknown[]) => void)[];

  beforeEach(() => {
    vi.clearAllMocks();
    MockPieceEngine.latestInstance = null;
    tmpDir = join(tmpdir(), `takt-notify-it-${randomUUID()}`);
    mkdirSync(tmpDir, { recursive: true });
    mkdirSync(join(tmpDir, '.takt', 'reports'), { recursive: true });

    savedSigintListeners = process.rawListeners('SIGINT') as ((...args: unknown[]) => void)[];
  });

  afterEach(() => {
    if (existsSync(tmpDir)) {
      rmSync(tmpDir, { recursive: true, force: true });
    }
    process.removeAllListeners('SIGINT');
    for (const listener of savedSigintListeners) {
      process.on('SIGINT', listener as NodeJS.SignalsListener);
    }
    process.removeAllListeners('uncaughtException');
  });

  describe('notifySuccess on piece:complete', () => {
    it('should call notifySuccess when notificationSound is undefined (default)', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude' });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.complete();
      await resultPromise;

      expect(mockNotifySuccess).toHaveBeenCalledOnce();
    });

    it('should call notifySuccess when notificationSound is true', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: true });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.complete();
      await resultPromise;

      expect(mockNotifySuccess).toHaveBeenCalledOnce();
    });

    it('should NOT call notifySuccess when notificationSound is false', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: false });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.complete();
      await resultPromise;

      expect(mockNotifySuccess).not.toHaveBeenCalled();
    });
  });

  describe('notifyError on piece:abort', () => {
    it('should call notifyError when notificationSound is undefined (default)', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude' });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockNotifyError).toHaveBeenCalledOnce();
    });

    it('should call notifyError when notificationSound is true', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: true });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockNotifyError).toHaveBeenCalledOnce();
    });

    it('should NOT call notifyError when notificationSound is false', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: false });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockNotifyError).not.toHaveBeenCalled();
    });
  });

  describe('playWarningSound on iteration limit', () => {
    it('should call playWarningSound when notificationSound is undefined (default)', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude' });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      await MockPieceEngine.latestInstance!.triggerIterationLimit();
      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockPlayWarningSound).toHaveBeenCalledOnce();
    });

    it('should call playWarningSound when notificationSound is true', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: true });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      await MockPieceEngine.latestInstance!.triggerIterationLimit();
      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockPlayWarningSound).toHaveBeenCalledOnce();
    });

    it('should NOT call playWarningSound when notificationSound is false', async () => {
      mockLoadGlobalConfig.mockReturnValue({ provider: 'claude', notificationSound: false });

      const resultPromise = executePiece(makeConfig(), 'test task', tmpDir, { projectCwd: tmpDir });
      await new Promise((resolve) => setTimeout(resolve, 10));

      await MockPieceEngine.latestInstance!.triggerIterationLimit();
      MockPieceEngine.latestInstance!.abort();
      await resultPromise;

      expect(mockPlayWarningSound).not.toHaveBeenCalled();
    });
  });
});
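All nine cases above hinge on a single gate: sounds fire unless `notificationSound` is explicitly `false`, so `undefined` and `true` behave identically. A minimal sketch of that gate, as inferred from the expectations (assumed shape, not takt's actual code):

```typescript
// Hypothetical gate inferred from the tests above: undefined and true both
// enable notifications; only an explicit false disables them.
interface GlobalConfigLike { notificationSound?: boolean }

function soundEnabled(config: GlobalConfigLike): boolean {
  return config.notificationSound !== false;
}

function notifyOnComplete(config: GlobalConfigLike, notifySuccess: () => void): void {
  if (soundEnabled(config)) notifySuccess();
}
```

Checking `!== false` rather than truthiness is the design point: a missing key keeps the historical sound-on default, which is why the "undefined" and "true" tests assert the same calls.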
@@ -447,6 +447,131 @@ movements:
  });
});

describe('Piece Loader IT: mcp_servers parsing', () => {
  let testDir: string;

  beforeEach(() => {
    testDir = createTestDir();
  });

  afterEach(() => {
    rmSync(testDir, { recursive: true, force: true });
  });

  it('should parse mcp_servers from YAML to PieceMovement.mcpServers', () => {
    const piecesDir = join(testDir, '.takt', 'pieces');
    mkdirSync(piecesDir, { recursive: true });

    writeFileSync(join(piecesDir, 'with-mcp.yaml'), `
name: with-mcp
description: Piece with MCP servers
max_iterations: 5
initial_movement: e2e-test

movements:
  - name: e2e-test
    persona: coder
    mcp_servers:
      playwright:
        command: npx
        args: ["-y", "@anthropic-ai/mcp-server-playwright"]
    allowed_tools:
      - Read
      - Bash
      - mcp__playwright__*
    rules:
      - condition: Done
        next: COMPLETE
    instruction: "Run E2E tests"
`);

    const config = loadPiece('with-mcp', testDir);

    expect(config).not.toBeNull();
    const e2eStep = config!.movements.find((s) => s.name === 'e2e-test');
    expect(e2eStep).toBeDefined();
    expect(e2eStep!.mcpServers).toEqual({
      playwright: {
        command: 'npx',
        args: ['-y', '@anthropic-ai/mcp-server-playwright'],
      },
    });
  });

  it('should allow movement without mcp_servers', () => {
    const piecesDir = join(testDir, '.takt', 'pieces');
    mkdirSync(piecesDir, { recursive: true });

    writeFileSync(join(piecesDir, 'no-mcp.yaml'), `
name: no-mcp
description: Piece without MCP servers
max_iterations: 5
initial_movement: implement

movements:
  - name: implement
    persona: coder
    rules:
      - condition: Done
        next: COMPLETE
    instruction: "Implement the feature"
`);

    const config = loadPiece('no-mcp', testDir);

    expect(config).not.toBeNull();
    const implementStep = config!.movements.find((s) => s.name === 'implement');
    expect(implementStep).toBeDefined();
    expect(implementStep!.mcpServers).toBeUndefined();
  });

  it('should parse mcp_servers with multiple servers and transports', () => {
    const piecesDir = join(testDir, '.takt', 'pieces');
    mkdirSync(piecesDir, { recursive: true });

    writeFileSync(join(piecesDir, 'multi-mcp.yaml'), `
name: multi-mcp
description: Piece with multiple MCP servers
max_iterations: 5
initial_movement: test

movements:
  - name: test
    persona: coder
    mcp_servers:
      playwright:
        command: npx
        args: ["-y", "@anthropic-ai/mcp-server-playwright"]
      remote-api:
        type: http
        url: http://localhost:3000/mcp
        headers:
          Authorization: "Bearer token123"
    rules:
      - condition: Done
        next: COMPLETE
    instruction: "Run tests"
`);

    const config = loadPiece('multi-mcp', testDir);

    expect(config).not.toBeNull();
    const testStep = config!.movements.find((s) => s.name === 'test');
    expect(testStep).toBeDefined();
    expect(testStep!.mcpServers).toEqual({
      playwright: {
        command: 'npx',
        args: ['-y', '@anthropic-ai/mcp-server-playwright'],
      },
      'remote-api': {
        type: 'http',
        url: 'http://localhost:3000/mcp',
        headers: { Authorization: 'Bearer token123' },
      },
    });
  });
});
describe('Piece Loader IT: invalid YAML handling', () => {
  let testDir: string;

@@ -65,6 +65,7 @@ vi.mock('../infra/github/pr.js', () => ({

vi.mock('../infra/task/git.js', () => ({
  stageAndCommit: vi.fn().mockReturnValue('abc1234'),
  getCurrentBranch: vi.fn().mockReturnValue('main'),
}));

vi.mock('../shared/ui/index.js', () => ({

@@ -128,6 +128,8 @@ vi.mock('../shared/utils/index.js', () => ({
  }),
  notifySuccess: vi.fn(),
  notifyError: vi.fn(),
  playWarningSound: vi.fn(),
  preventSleep: vi.fn(),
  isDebugEnabled: vi.fn().mockReturnValue(false),
  writePromptLog: vi.fn(),
}));

614 src/__tests__/lineEditor.test.ts Normal file
@@ -0,0 +1,614 @@
/**
 * Tests for lineEditor: parseInputData and readMultilineInput cursor navigation
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { parseInputData, type InputCallbacks } from '../features/interactive/lineEditor.js';

function createCallbacks(): InputCallbacks & { calls: string[] } {
  const calls: string[] = [];
  return {
    calls,
    onPasteStart() { calls.push('pasteStart'); },
    onPasteEnd() { calls.push('pasteEnd'); },
    onShiftEnter() { calls.push('shiftEnter'); },
    onArrowLeft() { calls.push('left'); },
    onArrowRight() { calls.push('right'); },
    onArrowUp() { calls.push('up'); },
    onArrowDown() { calls.push('down'); },
    onWordLeft() { calls.push('wordLeft'); },
    onWordRight() { calls.push('wordRight'); },
    onHome() { calls.push('home'); },
    onEnd() { calls.push('end'); },
    onChar(ch: string) { calls.push(`char:${ch}`); },
  };
}

describe('parseInputData', () => {
  describe('arrow key detection', () => {
    it('should detect arrow up escape sequence', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[A', cb);
      // Then
      expect(cb.calls).toEqual(['up']);
    });

    it('should detect arrow down escape sequence', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[B', cb);
      // Then
      expect(cb.calls).toEqual(['down']);
    });

    it('should detect arrow left escape sequence', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[D', cb);
      // Then
      expect(cb.calls).toEqual(['left']);
    });

    it('should detect arrow right escape sequence', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[C', cb);
      // Then
      expect(cb.calls).toEqual(['right']);
    });

    it('should parse mixed arrows and characters', () => {
      // Given
      const cb = createCallbacks();
      // When: type "a", up, "b", down
      parseInputData('a\x1B[Ab\x1B[B', cb);
      // Then
      expect(cb.calls).toEqual(['char:a', 'up', 'char:b', 'down']);
    });
  });

  describe('option+arrow key detection', () => {
    it('should detect ESC b as word left (Terminal.app style)', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1Bb', cb);
      // Then
      expect(cb.calls).toEqual(['wordLeft']);
    });

    it('should detect ESC f as word right (Terminal.app style)', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1Bf', cb);
      // Then
      expect(cb.calls).toEqual(['wordRight']);
    });

    it('should detect CSI 1;3D as word left (iTerm2/Kitty style)', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[1;3D', cb);
      // Then
      expect(cb.calls).toEqual(['wordLeft']);
    });

    it('should detect CSI 1;3C as word right (iTerm2/Kitty style)', () => {
      // Given
      const cb = createCallbacks();
      // When
      parseInputData('\x1B[1;3C', cb);
      // Then
      expect(cb.calls).toEqual(['wordRight']);
    });

    it('should not insert characters for option+arrow sequences', () => {
      // Given
      const cb = createCallbacks();
      // When: ESC b should not produce 'char:b'
      parseInputData('\x1Bb\x1Bf', cb);
      // Then
      expect(cb.calls).toEqual(['wordLeft', 'wordRight']);
      expect(cb.calls).not.toContain('char:b');
      expect(cb.calls).not.toContain('char:f');
    });
  });
});

describe('readMultilineInput cursor navigation', () => {
  let savedIsTTY: boolean | undefined;
  let savedIsRaw: boolean | undefined;
  let savedSetRawMode: typeof process.stdin.setRawMode | undefined;
  let savedStdoutWrite: typeof process.stdout.write;
  let savedStdinOn: typeof process.stdin.on;
  let savedStdinRemoveListener: typeof process.stdin.removeListener;
  let savedStdinResume: typeof process.stdin.resume;
  let savedStdinPause: typeof process.stdin.pause;
  let stdoutCalls: string[];

  function setupRawStdin(rawInputs: string[]): void {
    savedIsTTY = process.stdin.isTTY;
    savedIsRaw = process.stdin.isRaw;
    savedSetRawMode = process.stdin.setRawMode;
    savedStdoutWrite = process.stdout.write;
    savedStdinOn = process.stdin.on;
    savedStdinRemoveListener = process.stdin.removeListener;
    savedStdinResume = process.stdin.resume;
    savedStdinPause = process.stdin.pause;

    Object.defineProperty(process.stdin, 'isTTY', { value: true, configurable: true });
    Object.defineProperty(process.stdin, 'isRaw', { value: false, configurable: true, writable: true });
    process.stdin.setRawMode = vi.fn((mode: boolean) => {
      (process.stdin as unknown as { isRaw: boolean }).isRaw = mode;
      return process.stdin;
    }) as unknown as typeof process.stdin.setRawMode;
    stdoutCalls = [];
    process.stdout.write = vi.fn((data: string | Uint8Array) => {
      stdoutCalls.push(typeof data === 'string' ? data : data.toString());
      return true;
    }) as unknown as typeof process.stdout.write;
    process.stdin.resume = vi.fn(() => process.stdin) as unknown as typeof process.stdin.resume;
    process.stdin.pause = vi.fn(() => process.stdin) as unknown as typeof process.stdin.pause;

    let currentHandler: ((data: Buffer) => void) | null = null;
    let inputIndex = 0;

    process.stdin.on = vi.fn(((event: string, handler: (...args: unknown[]) => void) => {
      if (event === 'data') {
        currentHandler = handler as (data: Buffer) => void;
        if (inputIndex < rawInputs.length) {
          const data = rawInputs[inputIndex]!;
          inputIndex++;
          queueMicrotask(() => {
            if (currentHandler) {
              currentHandler(Buffer.from(data, 'utf-8'));
            }
          });
        }
      }
      return process.stdin;
    }) as typeof process.stdin.on);

    process.stdin.removeListener = vi.fn(((event: string) => {
      if (event === 'data') {
        currentHandler = null;
      }
      return process.stdin;
    }) as typeof process.stdin.removeListener);
  }

  function restoreStdin(): void {
    if (savedIsTTY !== undefined) {
      Object.defineProperty(process.stdin, 'isTTY', { value: savedIsTTY, configurable: true });
    }
    if (savedIsRaw !== undefined) {
      Object.defineProperty(process.stdin, 'isRaw', { value: savedIsRaw, configurable: true, writable: true });
    }
    if (savedSetRawMode) process.stdin.setRawMode = savedSetRawMode;
    if (savedStdoutWrite) process.stdout.write = savedStdoutWrite;
    if (savedStdinOn) process.stdin.on = savedStdinOn;
    if (savedStdinRemoveListener) process.stdin.removeListener = savedStdinRemoveListener;
    if (savedStdinResume) process.stdin.resume = savedStdinResume;
    if (savedStdinPause) process.stdin.pause = savedStdinPause;
  }

  beforeEach(() => {
    vi.clearAllMocks();
  });

  afterEach(() => {
    restoreStdin();
  });

  // We need to dynamically import after mocking stdin
  async function callReadMultilineInput(prompt: string): Promise<string | null> {
    const { readMultilineInput } = await import('../features/interactive/lineEditor.js');
    return readMultilineInput(prompt);
  }

  describe('left arrow line wrap', () => {
    it('should move to end of previous line when at line start', async () => {
      // Given: "abc\ndef" with cursor at start of "def", press left → cursor at end of "abc" (pos 3)
      // Type "abc", Shift+Enter, "def", Home (to line start of "def"), Left, type "X", Enter
      // "abc" + "\n" + "def" → left wraps to end of "abc" → insert "X" at pos 3 → "abcX\ndef"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[H\x1B[DX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX\ndef');
    });

    it('should not wrap when at start of first line', async () => {
      // Given: "abc", Home, Left (should do nothing at pos 0), type "X", Enter
      setupRawStdin([
        'abc\x1B[H\x1B[DX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('Xabc');
    });
  });

  describe('right arrow line wrap', () => {
    it('should move to start of next line when at line end', async () => {
      // Given: "abc\ndef", cursor at end of "abc" (pos 3), press right → cursor at start of "def" (pos 4)
      // Type "abc", Shift+Enter, "def", then navigate: Home → start of "def", Up → same col in "abc"=start,
      // End → end of "abc", Right → wraps to start of "def", type "X", Enter
      // Result: "abc\nXdef"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[H\x1B[A\x1B[F\x1B[CX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abc\nXdef');
    });

    it('should not wrap when at end of last line', async () => {
      // Given: "abc", End (already at end), Right (no next line), type "X", Enter
      setupRawStdin([
        'abc\x1B[F\x1B[CX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX');
    });
  });

  describe('arrow up', () => {
    it('should move to previous line at same column', async () => {
      // Given: "abcde\nfgh", cursor at end of "fgh" (col 3), press up → col 3 in "abcde" (pos 3)
      // Insert "X" → "abcXde\nfgh"
      setupRawStdin([
        'abcde\x1B[13;2ufgh\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcXde\nfgh');
    });

    it('should clamp to end of shorter previous line', async () => {
      // Given: "ab\ncdefg", cursor at end of "cdefg" (col 5), press up → col 2 (end of "ab") (pos 2)
      // Insert "X" → "abX\ncdefg"
      setupRawStdin([
        'ab\x1B[13;2ucdefg\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abX\ncdefg');
    });

    it('should do nothing when on first line', async () => {
      // Given: "abc", press up (no previous line), type "X", Enter
      setupRawStdin([
        'abc\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX');
    });
  });

  describe('arrow down', () => {
    it('should move to next line at same column', async () => {
      // Given: "abcde\nfgh", cursor at col 2 of "abcde" (use Home+Right+Right), press down → col 2 in "fgh"
      // Insert "X" → "abcde\nfgXh"
      // Strategy: type "abcde", Shift+Enter, "fgh", Up (→ end of "abcde" col 3), Home, Right, Right, Down, X, Enter
      setupRawStdin([
        'abcde\x1B[13;2ufgh\x1B[A\x1B[H\x1B[C\x1B[C\x1B[BX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcde\nfgXh');
    });

    it('should clamp to end of shorter next line', async () => {
      // Given: "abcde\nfg", cursor at col 4 in "abcde", press down → col 2 (end of "fg")
      // Insert "X" → "abcde\nfgX"
      setupRawStdin([
        'abcde\x1B[13;2ufg\x1B[A\x1B[H\x1B[C\x1B[C\x1B[C\x1B[C\x1B[BX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcde\nfgX');
    });

    it('should do nothing when on last line', async () => {
      // Given: "abc", press down (no next line), type "X", Enter
      setupRawStdin([
        'abc\x1B[BX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX');
    });

    it('should do nothing when next line has no text beyond newline', async () => {
      // Given: "abc" with no next line, down does nothing
      // buffer = "abc", lineEnd = 3, buffer.length = 3, so lineEnd >= buffer.length → return
      setupRawStdin([
        'abc\x1B[BX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX');
    });
  });

  describe('terminal escape sequences for line navigation', () => {
    it('should emit CUU and CHA when moving up', async () => {
      // Given: "ab\ncd", cursor at end of "cd", press up
      setupRawStdin([
        'ab\x1B[13;2ucd\x1B[A\r',
      ]);

      // When
      await callReadMultilineInput('> ');

      // Then: should contain \x1B[A (cursor up) and \x1B[{n}G (cursor horizontal absolute)
      const hasUpMove = stdoutCalls.some(c => c === '\x1B[A');
      const hasCha = stdoutCalls.some(c => /^\x1B\[\d+G$/.test(c));
      expect(hasUpMove).toBe(true);
      expect(hasCha).toBe(true);
    });

    it('should emit CUD and CHA when moving down', async () => {
      // Given: "ab\ncd", cursor at end of "ab" (navigate up then down)
      setupRawStdin([
        'ab\x1B[13;2ucd\x1B[A\x1B[B\r',
      ]);

      // When
      await callReadMultilineInput('> ');

      // Then: should contain \x1B[B (cursor down) and \x1B[{n}G
      const hasDownMove = stdoutCalls.some(c => c === '\x1B[B');
      const hasCha = stdoutCalls.some(c => /^\x1B\[\d+G$/.test(c));
      expect(hasDownMove).toBe(true);
      expect(hasCha).toBe(true);
    });

    it('should emit CUU and CHA when left wraps to previous line', async () => {
      // Given: "ab\ncd", cursor at start of "cd", press left
      setupRawStdin([
        'ab\x1B[13;2ucd\x1B[H\x1B[D\r',
      ]);

      // When
      await callReadMultilineInput('> ');

      // Then: should contain \x1B[A (up) for wrapping to previous line
      const hasUpMove = stdoutCalls.some(c => c === '\x1B[A');
      expect(hasUpMove).toBe(true);
    });

    it('should emit CUD and CHA when right wraps to next line', async () => {
      // Given: "ab\ncd", cursor at end of "ab", press right
      setupRawStdin([
        'ab\x1B[13;2ucd\x1B[A\x1B[F\x1B[C\r',
      ]);

      // When
      await callReadMultilineInput('> ');

      // Then: should contain \x1B[B (down) for wrapping to next line
      const hasDownMove = stdoutCalls.some(c => c === '\x1B[B');
      expect(hasDownMove).toBe(true);
    });
  });

  describe('full-width character support', () => {
    it('should move cursor by 2 columns for full-width character with arrow left', async () => {
      // Given: "あいう", cursor at end (col 6 in display), press left → cursor before "う" (display col 4)
      // Insert "X" → "あいXう"
      setupRawStdin([
        'あいう\x1B[DX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('あいXう');
    });

    it('should emit correct terminal width for backspace on full-width char', async () => {
      // Given: "あいう", press backspace → "あい"
      setupRawStdin([
        'あいう\x7F\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('あい');
      // Should move 2 columns back for the full-width character
      const hasTwoColBack = stdoutCalls.some(c => c === '\x1B[2D');
      expect(hasTwoColBack).toBe(true);
    });

    it('should navigate up/down correctly with full-width characters', async () => {
      // Given: "あいう\nabc", cursor at end of "abc" (display col 3)
      // Press up → display col 3 in "あいう" → between "あ" and "い" (buffer pos 1, display col 2)
      // because display col 3 falls in the middle of "い" (cols 2-3), findPositionByDisplayColumn stops at col 2
      // Insert "X" → "あXいう\nabc"
      setupRawStdin([
        'あいう\x1B[13;2uabc\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('あXいう\nabc');
    });

    it('should calculate terminal column correctly with full-width on first line', async () => {
      // Given: "あ\nb", cursor at "b", press up → first line, prompt ">" (2 cols) + "あ" (2 cols) = CHA col 3
      // Since target display col 1 < "あ" width 2, cursor goes to pos 0 (before "あ")
      // Insert "X" → "Xあ\nb"
      setupRawStdin([
        'あ\x1B[13;2ub\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('Xあ\nb');
    });
  });

  describe('word movement (option+arrow)', () => {
    it('should move left by one word with ESC b', async () => {
      // Given: "hello world", cursor at end, press Option+Left → cursor before "world", insert "X"
      // Result: "hello Xworld"
      setupRawStdin([
        'hello world\x1BbX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('hello Xworld');
    });

    it('should move right by one word with ESC f', async () => {
      // Given: "hello world", Home, Option+Right → skip "hello" then space → cursor at "world", insert "X"
      // Result: "hello Xworld"
      setupRawStdin([
        'hello world\x1B[H\x1BfX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('hello Xworld');
    });

    it('should not move past line start with word left', async () => {
      // Given: "abc\ndef", cursor at start of "def", Option+Left does nothing, type "X"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[H\x1BbX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abc\nXdef');
    });

    it('should not move past line end with word right', async () => {
      // Given: "abc\ndef", cursor at end of "abc" (navigate up from "def"), Option+Right does nothing, type "X"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[A\x1BfX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX\ndef');
    });

    it('should skip spaces then word chars with word left', async () => {
      // Given: "foo bar baz", cursor at end, Option+Left → cursor before "baz"
      setupRawStdin([
        'foo bar baz\x1BbX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('foo bar Xbaz');
    });

    it('should work with CSI 1;3D format', async () => {
      // Given: "hello world", cursor at end, CSI Option+Left → cursor before "world", insert "X"
      setupRawStdin([
        'hello world\x1B[1;3DX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('hello Xworld');
    });
  });

  describe('three-line navigation', () => {
    it('should navigate across three lines with up and down', async () => {
      // Given: "abc\ndef\nghi", cursor at end of "ghi" (col 3)
      // Press up twice → col 3 in "abc" (clamped to 3), insert "X" → "abcX\ndef\nghi"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[13;2ughi\x1B[A\x1B[AX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abcX\ndef\nghi');
    });

    it('should navigate down from first line to third line', async () => {
      // Given: "abc\ndef\nghi", navigate to first line, then down twice to "ghi"
      // Type all, then Up Up (→ first line end col 3), Down Down (→ third line col 3), type "X"
      setupRawStdin([
        'abc\x1B[13;2udef\x1B[13;2ughi\x1B[A\x1B[A\x1B[B\x1B[BX\r',
      ]);

      // When
      const result = await callReadMultilineInput('> ');

      // Then
      expect(result).toBe('abc\ndef\nghiX');
    });
  });
});
@@ -8,6 +8,7 @@ import {
  StatusSchema,
  PermissionModeSchema,
  PieceConfigRawSchema,
  McpServerConfigSchema,
  CustomAgentConfigSchema,
  GlobalConfigSchema,
} from '../core/models/index.js';

@@ -143,6 +144,210 @@ describe('PieceConfigRawSchema', () => {

    expect(() => PieceConfigRawSchema.parse(config)).toThrow();
  });

  it('should parse movement with stdio mcp_servers', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'e2e-test',
          persona: 'coder',
          mcp_servers: {
            playwright: {
              command: 'npx',
              args: ['-y', '@anthropic-ai/mcp-server-playwright'],
            },
          },
          allowed_tools: ['mcp__playwright__*'],
          instruction: '{task}',
        },
      ],
    };

    const result = PieceConfigRawSchema.parse(config);
    expect(result.movements![0]?.mcp_servers).toEqual({
      playwright: {
        command: 'npx',
        args: ['-y', '@anthropic-ai/mcp-server-playwright'],
      },
    });
  });

  it('should parse movement with sse mcp_servers', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          mcp_servers: {
            remote: {
              type: 'sse',
              url: 'http://localhost:8080/sse',
              headers: { Authorization: 'Bearer token' },
            },
          },
          instruction: '{task}',
        },
      ],
    };

    const result = PieceConfigRawSchema.parse(config);
    expect(result.movements![0]?.mcp_servers).toEqual({
      remote: {
        type: 'sse',
        url: 'http://localhost:8080/sse',
        headers: { Authorization: 'Bearer token' },
      },
    });
  });

  it('should parse movement with http mcp_servers', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          mcp_servers: {
            api: {
              type: 'http',
              url: 'http://localhost:3000/mcp',
            },
          },
          instruction: '{task}',
        },
      ],
    };

    const result = PieceConfigRawSchema.parse(config);
    expect(result.movements![0]?.mcp_servers).toEqual({
      api: {
        type: 'http',
        url: 'http://localhost:3000/mcp',
      },
    });
  });

  it('should allow omitting mcp_servers', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          instruction: '{task}',
        },
      ],
    };

    const result = PieceConfigRawSchema.parse(config);
    expect(result.movements![0]?.mcp_servers).toBeUndefined();
  });

  it('should reject invalid mcp_servers (missing command for stdio)', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          mcp_servers: {
            broken: { args: ['--flag'] },
          },
          instruction: '{task}',
        },
      ],
    };

    expect(() => PieceConfigRawSchema.parse(config)).toThrow();
  });

  it('should reject invalid mcp_servers (missing url for sse)', () => {
    const config = {
      name: 'test-piece',
      movements: [
        {
          name: 'step1',
          persona: 'coder',
          mcp_servers: {
            broken: { type: 'sse' },
          },
          instruction: '{task}',
        },
      ],
    };

    expect(() => PieceConfigRawSchema.parse(config)).toThrow();
  });
});

describe('McpServerConfigSchema', () => {
  it('should parse stdio config', () => {
    const config = { command: 'npx', args: ['-y', 'some-server'], env: { NODE_ENV: 'test' } };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse stdio config with command only', () => {
    const config = { command: 'mcp-server' };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse stdio config with explicit type', () => {
    const config = { type: 'stdio' as const, command: 'npx', args: ['-y', 'some-server'] };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse sse config', () => {
    const config = { type: 'sse' as const, url: 'http://localhost:8080/sse' };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse sse config with headers', () => {
    const config = { type: 'sse' as const, url: 'http://example.com', headers: { 'X-Key': 'val' } };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse http config', () => {
    const config = { type: 'http' as const, url: 'http://localhost:3000/mcp' };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should parse http config with headers', () => {
    const config = { type: 'http' as const, url: 'http://example.com', headers: { Authorization: 'Bearer x' } };
    const result = McpServerConfigSchema.parse(config);
    expect(result).toEqual(config);
  });

  it('should reject empty command for stdio', () => {
    expect(() => McpServerConfigSchema.parse({ command: '' })).toThrow();
  });

  it('should reject missing url for sse', () => {
    expect(() => McpServerConfigSchema.parse({ type: 'sse' })).toThrow();
  });

  it('should reject missing url for http', () => {
    expect(() => McpServerConfigSchema.parse({ type: 'http' })).toThrow();
  });

  it('should reject empty url for sse', () => {
    expect(() => McpServerConfigSchema.parse({ type: 'sse', url: '' })).toThrow();
  });

  it('should reject unknown type', () => {
    expect(() => McpServerConfigSchema.parse({ type: 'websocket', url: 'ws://localhost' })).toThrow();
  });

  it('should reject empty object', () => {
    expect(() => McpServerConfigSchema.parse({})).toThrow();
  });
});

describe('CustomAgentConfigSchema', () => {

@@ -3,9 +3,10 @@
 */

import { describe, it, expect } from 'vitest';
-import { SdkOptionsBuilder } from '../infra/claude/options-builder.js';
+import { SdkOptionsBuilder, buildSdkOptions } from '../infra/claude/options-builder.js';
import { mapToCodexSandboxMode } from '../infra/codex/types.js';
import type { PermissionMode } from '../core/models/index.js';
import type { ClaudeSpawnOptions } from '../infra/claude/types.js';

describe('SdkOptionsBuilder.mapToSdkPermissionMode', () => {
  it('should map readonly to SDK default', () => {
@@ -52,3 +53,53 @@ describe('mapToCodexSandboxMode', () => {
    }
  });
});

describe('SdkOptionsBuilder.build() — mcpServers', () => {
  it('should include mcpServers in SDK options when provided', () => {
    const spawnOptions: ClaudeSpawnOptions = {
      cwd: '/tmp/test',
      mcpServers: {
        playwright: {
          command: 'npx',
          args: ['-y', '@anthropic-ai/mcp-server-playwright'],
        },
      },
    };

    const sdkOptions = buildSdkOptions(spawnOptions);
    expect(sdkOptions.mcpServers).toEqual({
      playwright: {
        command: 'npx',
        args: ['-y', '@anthropic-ai/mcp-server-playwright'],
      },
    });
  });

  it('should not include mcpServers in SDK options when not provided', () => {
    const spawnOptions: ClaudeSpawnOptions = {
      cwd: '/tmp/test',
    };

    const sdkOptions = buildSdkOptions(spawnOptions);
    expect(sdkOptions).not.toHaveProperty('mcpServers');
  });

  it('should include mcpServers alongside other options', () => {
    const spawnOptions: ClaudeSpawnOptions = {
      cwd: '/tmp/test',
      allowedTools: ['Read', 'mcp__playwright__*'],
      mcpServers: {
        playwright: {
          command: 'npx',
          args: ['-y', '@anthropic-ai/mcp-server-playwright'],
        },
      },
      permissionMode: 'edit',
    };

    const sdkOptions = buildSdkOptions(spawnOptions);
    expect(sdkOptions.mcpServers).toBeDefined();
    expect(sdkOptions.allowedTools).toEqual(['Read', 'mcp__playwright__*']);
    expect(sdkOptions.permissionMode).toBe('acceptEdits');
  });
});

@ -1,9 +1,9 @@
|
||||
/**
|
||||
* Tests for getPieceDescription and buildWorkflowString
|
||||
* Tests for getPieceDescription, buildWorkflowString, and buildMovementPreviews
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
|
||||
import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'node:fs';
|
||||
import { mkdtempSync, writeFileSync, mkdirSync, rmSync, chmodSync } from 'node:fs';
|
||||
import { join } from 'node:path';
|
||||
import { tmpdir } from 'node:os';
|
||||
import { getPieceDescription } from '../infra/config/loaders/pieceResolver.js';
|
||||
@@ -49,6 +49,7 @@ movements:
    expect(result.pieceStructure).toBe(
      '1. plan (タスク計画)\n2. implement (実装)\n3. review'
    );
    expect(result.movementPreviews).toEqual([]);
  });

  it('should return workflow structure with parallel movements', () => {
@@ -91,6 +92,7 @@ movements:
      ' - arch_review\n' +
      '3. fix (修正)'
    );
    expect(result.movementPreviews).toEqual([]);
  });

  it('should handle movements without descriptions', () => {
@@ -115,6 +117,7 @@ movements:
    expect(result.name).toBe('minimal');
    expect(result.description).toBe('');
    expect(result.pieceStructure).toBe('1. step1\n2. step2');
    expect(result.movementPreviews).toEqual([]);
  });

  it('should return empty strings when piece is not found', () => {
@@ -123,6 +126,7 @@ movements:
    expect(result.name).toBe('nonexistent');
    expect(result.description).toBe('');
    expect(result.pieceStructure).toBe('');
    expect(result.movementPreviews).toEqual([]);
  });

  it('should handle parallel movements without descriptions', () => {
@@ -151,5 +155,411 @@ movements:
      ' - child1\n' +
      ' - child2'
    );
    expect(result.movementPreviews).toEqual([]);
  });
});

describe('getPieceDescription with movementPreviews', () => {
  let tempDir: string;

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), 'takt-test-previews-'));
  });

  afterEach(() => {
    rmSync(tempDir, { recursive: true, force: true });
  });

  it('should return movement previews when previewCount is specified', () => {
    const pieceYaml = `name: preview-test
description: Test piece
initial_movement: plan
max_iterations: 5

movements:
  - name: plan
    description: Planning
    persona: Plan the task
    instruction: "Create a plan for {task}"
    allowed_tools:
      - Read
      - Glob
    rules:
      - condition: plan complete
        next: implement
  - name: implement
    description: Implementation
    persona: Implement the code
    instruction: "Implement according to plan"
    edit: true
    allowed_tools:
      - Read
      - Edit
      - Bash
    rules:
      - condition: done
        next: review
  - name: review
    persona: Review the code
    instruction: "Review changes"
    rules:
      - condition: approved
        next: COMPLETE
`;

    const piecePath = join(tempDir, 'preview-test.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 3);

    expect(result.movementPreviews).toHaveLength(3);

    // First movement: plan
    expect(result.movementPreviews[0].name).toBe('plan');
    expect(result.movementPreviews[0].personaContent).toBe('Plan the task');
    expect(result.movementPreviews[0].instructionContent).toBe('Create a plan for {task}');
    expect(result.movementPreviews[0].allowedTools).toEqual(['Read', 'Glob']);
    expect(result.movementPreviews[0].canEdit).toBe(false);

    // Second movement: implement
    expect(result.movementPreviews[1].name).toBe('implement');
    expect(result.movementPreviews[1].personaContent).toBe('Implement the code');
    expect(result.movementPreviews[1].instructionContent).toBe('Implement according to plan');
    expect(result.movementPreviews[1].allowedTools).toEqual(['Read', 'Edit', 'Bash']);
    expect(result.movementPreviews[1].canEdit).toBe(true);

    // Third movement: review
    expect(result.movementPreviews[2].name).toBe('review');
    expect(result.movementPreviews[2].personaContent).toBe('Review the code');
    expect(result.movementPreviews[2].canEdit).toBe(false);
  });

  it('should return empty previews when previewCount is 0', () => {
    const pieceYaml = `name: test
initial_movement: step1
max_iterations: 1

movements:
  - name: step1
    persona: agent
    instruction: "Do step1"
`;

    const piecePath = join(tempDir, 'test.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 0);

    expect(result.movementPreviews).toEqual([]);
  });

  it('should return empty previews when previewCount is not specified', () => {
    const pieceYaml = `name: test
initial_movement: step1
max_iterations: 1

movements:
  - name: step1
    persona: agent
    instruction: "Do step1"
`;

    const piecePath = join(tempDir, 'test.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir);

    expect(result.movementPreviews).toEqual([]);
  });

  it('should stop at COMPLETE movement', () => {
    const pieceYaml = `name: test-complete
initial_movement: step1
max_iterations: 3

movements:
  - name: step1
    persona: agent1
    instruction: "Step 1"
    rules:
      - condition: done
        next: COMPLETE
  - name: step2
    persona: agent2
    instruction: "Step 2"
`;

    const piecePath = join(tempDir, 'test-complete.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 5);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].name).toBe('step1');
  });

  it('should stop at ABORT movement', () => {
    const pieceYaml = `name: test-abort
initial_movement: step1
max_iterations: 3

movements:
  - name: step1
    persona: agent1
    instruction: "Step 1"
    rules:
      - condition: abort
        next: ABORT
  - name: step2
    persona: agent2
    instruction: "Step 2"
`;

    const piecePath = join(tempDir, 'test-abort.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 5);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].name).toBe('step1');
  });

  it('should read persona content from file when personaPath is set', () => {
    const personaContent = '# Planner Persona\nYou are a planning expert.';
    const personaPath = join(tempDir, 'planner.md');
    writeFileSync(personaPath, personaContent);

    const pieceYaml = `name: test-persona-file
initial_movement: plan
max_iterations: 1

personas:
  planner: ./planner.md

movements:
  - name: plan
    persona: planner
    instruction: "Plan the task"
`;

    const piecePath = join(tempDir, 'test-persona-file.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 1);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].name).toBe('plan');
    expect(result.movementPreviews[0].personaContent).toBe(personaContent);
  });

  it('should limit previews to maxCount', () => {
    const pieceYaml = `name: test-limit
initial_movement: step1
max_iterations: 5

movements:
  - name: step1
    persona: agent1
    instruction: "Step 1"
    rules:
      - condition: done
        next: step2
  - name: step2
    persona: agent2
    instruction: "Step 2"
    rules:
      - condition: done
        next: step3
  - name: step3
    persona: agent3
    instruction: "Step 3"
`;

    const piecePath = join(tempDir, 'test-limit.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 2);

    expect(result.movementPreviews).toHaveLength(2);
    expect(result.movementPreviews[0].name).toBe('step1');
    expect(result.movementPreviews[1].name).toBe('step2');
  });

  it('should handle movements without rules (stop after first)', () => {
    const pieceYaml = `name: test-no-rules
initial_movement: step1
max_iterations: 3

movements:
  - name: step1
    persona: agent1
    instruction: "Step 1"
  - name: step2
    persona: agent2
    instruction: "Step 2"
`;

    const piecePath = join(tempDir, 'test-no-rules.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 3);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].name).toBe('step1');
  });

  it('should return empty previews when initial movement not found in list', () => {
    const pieceYaml = `name: test-missing-initial
initial_movement: nonexistent
max_iterations: 1

movements:
  - name: step1
    persona: agent
    instruction: "Do something"
`;

    const piecePath = join(tempDir, 'test-missing-initial.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 3);

    expect(result.movementPreviews).toEqual([]);
  });

  it('should handle self-referencing rule (prevent infinite loop)', () => {
    const pieceYaml = `name: test-self-ref
initial_movement: step1
max_iterations: 5

movements:
  - name: step1
    persona: agent1
    instruction: "Step 1"
    rules:
      - condition: loop
        next: step1
`;

    const piecePath = join(tempDir, 'test-self-ref.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 5);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].name).toBe('step1');
  });

  it('should handle multi-node cycle A→B→A (prevent duplicate previews)', () => {
    const pieceYaml = `name: test-cycle
initial_movement: stepA
max_iterations: 10

movements:
  - name: stepA
    persona: agentA
    instruction: "Step A"
    rules:
      - condition: next
        next: stepB
  - name: stepB
    persona: agentB
    instruction: "Step B"
    rules:
      - condition: back
        next: stepA
`;

    const piecePath = join(tempDir, 'test-cycle.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 10);

    expect(result.movementPreviews).toHaveLength(2);
    expect(result.movementPreviews[0].name).toBe('stepA');
    expect(result.movementPreviews[1].name).toBe('stepB');
  });

  it('should return empty movementPreviews when piece is not found', () => {
    const result = getPieceDescription('nonexistent', tempDir, 3);

    expect(result.movementPreviews).toEqual([]);
  });

  it('should use inline persona content when no personaPath', () => {
    const pieceYaml = `name: test-inline
initial_movement: step1
max_iterations: 1

movements:
  - name: step1
    persona: You are an inline persona
    instruction: "Do something"
`;

    const piecePath = join(tempDir, 'test-inline.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 1);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].personaContent).toBe('You are an inline persona');
  });

  it('should fallback to empty personaContent when personaPath file becomes unreadable', () => {
    // Create the persona file so it passes existsSync during parsing
    const personaPath = join(tempDir, 'unreadable-persona.md');
    writeFileSync(personaPath, '# Persona content');
    // Make the file unreadable so readFileSync fails in buildMovementPreviews
    chmodSync(personaPath, 0o000);

    const pieceYaml = `name: test-unreadable-persona
initial_movement: plan
max_iterations: 1

personas:
  planner: ./unreadable-persona.md

movements:
  - name: plan
    persona: planner
    instruction: "Plan the task"
`;

    const piecePath = join(tempDir, 'test-unreadable-persona.yaml');
    writeFileSync(piecePath, pieceYaml);

    try {
      const result = getPieceDescription(piecePath, tempDir, 1);

      expect(result.movementPreviews).toHaveLength(1);
      expect(result.movementPreviews[0].name).toBe('plan');
      expect(result.movementPreviews[0].personaContent).toBe('');
      expect(result.movementPreviews[0].instructionContent).toBe('Plan the task');
    } finally {
      // Restore permissions so cleanup can remove the file
      chmodSync(personaPath, 0o644);
    }
  });

  it('should include personaDisplayName in previews', () => {
    const pieceYaml = `name: test-display
initial_movement: step1
max_iterations: 1

movements:
  - name: step1
    persona: agent
    persona_name: Custom Agent Name
    instruction: "Do something"
`;

    const piecePath = join(tempDir, 'test-display.yaml');
    writeFileSync(piecePath, pieceYaml);

    const result = getPieceDescription(piecePath, tempDir, 1);

    expect(result.movementPreviews).toHaveLength(1);
    expect(result.movementPreviews[0].personaDisplayName).toBe('Custom Agent Name');
  });
});

@@ -218,6 +218,37 @@ describe('executePipeline', () => {
    );
  });

  it('should pass baseBranch as base to createPullRequest', async () => {
    // Given: getCurrentBranch returns 'develop' before branch creation
    mockExecFileSync.mockImplementation((_cmd: string, args: string[]) => {
      if (args[0] === 'rev-parse' && args[1] === '--abbrev-ref') {
        return 'develop\n';
      }
      return 'abc1234\n';
    });
    mockExecuteTask.mockResolvedValueOnce(true);
    mockCreatePullRequest.mockReturnValueOnce({ success: true, url: 'https://github.com/test/pr/1' });

    // When
    const exitCode = await executePipeline({
      task: 'Fix the bug',
      piece: 'default',
      branch: 'fix/my-branch',
      autoPr: true,
      cwd: '/tmp/test',
    });

    // Then
    expect(exitCode).toBe(0);
    expect(mockCreatePullRequest).toHaveBeenCalledWith(
      '/tmp/test',
      expect.objectContaining({
        branch: 'fix/my-branch',
        base: 'develop',
      }),
    );
  });

  it('should use --task when both --task and positional task are provided', async () => {
    mockExecuteTask.mockResolvedValueOnce(true);

436 src/__tests__/runAllTasks-concurrency.test.ts Normal file
@@ -0,0 +1,436 @@
/**
 * Tests for runAllTasks concurrency support (worker pool)
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { TaskInfo } from '../infra/task/index.js';

// Mock dependencies before importing the module under test
vi.mock('../infra/config/index.js', () => ({
  loadPieceByIdentifier: vi.fn(),
  isPiecePath: vi.fn(() => false),
  loadGlobalConfig: vi.fn(() => ({
    language: 'en',
    defaultPiece: 'default',
    logLevel: 'info',
    concurrency: 1,
  })),
}));

import { loadGlobalConfig } from '../infra/config/index.js';
const mockLoadGlobalConfig = vi.mocked(loadGlobalConfig);

const mockGetNextTask = vi.fn();
const mockClaimNextTasks = vi.fn();
const mockCompleteTask = vi.fn();
const mockFailTask = vi.fn();

vi.mock('../infra/task/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  TaskRunner: vi.fn().mockImplementation(() => ({
    getNextTask: mockGetNextTask,
    claimNextTasks: mockClaimNextTasks,
    completeTask: mockCompleteTask,
    failTask: mockFailTask,
  })),
}));

vi.mock('../infra/task/clone.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  createSharedClone: vi.fn(),
  removeClone: vi.fn(),
}));

vi.mock('../infra/task/git.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  getCurrentBranch: vi.fn(() => 'main'),
}));

vi.mock('../infra/task/autoCommit.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  autoCommitAndPush: vi.fn(),
}));

vi.mock('../infra/task/summarize.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  summarizeTaskName: vi.fn(),
}));

vi.mock('../shared/ui/index.js', () => ({
  header: vi.fn(),
  info: vi.fn(),
  warn: vi.fn(),
  error: vi.fn(),
  success: vi.fn(),
  status: vi.fn(),
  blankLine: vi.fn(),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  createLogger: () => ({
    info: vi.fn(),
    debug: vi.fn(),
    error: vi.fn(),
  }),
  getErrorMessage: vi.fn((e) => e.message),
}));

vi.mock('../features/tasks/execute/pieceExecution.js', () => ({
  executePiece: vi.fn(() => Promise.resolve({ success: true })),
}));

vi.mock('../shared/context.js', () => ({
  isQuietMode: vi.fn(() => false),
}));

vi.mock('../shared/constants.js', () => ({
  DEFAULT_PIECE_NAME: 'default',
  DEFAULT_LANGUAGE: 'en',
}));

vi.mock('../infra/github/index.js', () => ({
  createPullRequest: vi.fn(),
  buildPrBody: vi.fn(),
  pushBranch: vi.fn(),
}));

vi.mock('../infra/claude/index.js', () => ({
  interruptAllQueries: vi.fn(),
  callAiJudge: vi.fn(),
  detectRuleIndex: vi.fn(),
}));

vi.mock('../shared/exitCodes.js', () => ({
  EXIT_SIGINT: 130,
}));

vi.mock('../shared/i18n/index.js', () => ({
  getLabel: vi.fn((key: string) => key),
}));

import { info, header, status, success, error as errorFn } from '../shared/ui/index.js';
import { runAllTasks } from '../features/tasks/index.js';
import { executePiece } from '../features/tasks/execute/pieceExecution.js';
import { loadPieceByIdentifier } from '../infra/config/index.js';

const mockInfo = vi.mocked(info);
const mockHeader = vi.mocked(header);
const mockStatus = vi.mocked(status);
const mockSuccess = vi.mocked(success);
const mockError = vi.mocked(errorFn);
const mockExecutePiece = vi.mocked(executePiece);
const mockLoadPieceByIdentifier = vi.mocked(loadPieceByIdentifier);

function createTask(name: string): TaskInfo {
  return {
    name,
    content: `Task: ${name}`,
    filePath: `/tasks/${name}.yaml`,
  };
}

beforeEach(() => {
  vi.clearAllMocks();
});

describe('runAllTasks concurrency', () => {
  describe('sequential execution (concurrency=1)', () => {
    beforeEach(() => {
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 1,
      });
    });

    it('should show no-tasks message when no tasks exist', async () => {
      // Given: No pending tasks
      mockClaimNextTasks.mockReturnValue([]);

      // When
      await runAllTasks('/project');

      // Then
      expect(mockInfo).toHaveBeenCalledWith('No pending tasks in .takt/tasks/');
    });

    it('should execute tasks sequentially via worker pool when concurrency is 1', async () => {
      // Given: Two tasks available sequentially
      const task1 = createTask('task-1');
      const task2 = createTask('task-2');

      mockClaimNextTasks
        .mockReturnValueOnce([task1])
        .mockReturnValueOnce([task2])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: Worker pool uses claimNextTasks for fetching more tasks
      expect(mockClaimNextTasks).toHaveBeenCalled();
      expect(mockStatus).toHaveBeenCalledWith('Total', '2');
    });
  });

  describe('parallel execution (concurrency>1)', () => {
    beforeEach(() => {
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 3,
      });
    });

    it('should display concurrency info when concurrency > 1', async () => {
      // Given: Tasks available
      const task1 = createTask('task-1');
      mockClaimNextTasks
        .mockReturnValueOnce([task1])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then
      expect(mockInfo).toHaveBeenCalledWith('Concurrency: 3');
    });

    it('should execute tasks using worker pool when concurrency > 1', async () => {
      // Given: 3 tasks available
      const task1 = createTask('task-1');
      const task2 = createTask('task-2');
      const task3 = createTask('task-3');

      mockClaimNextTasks
        .mockReturnValueOnce([task1, task2, task3])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: Task names displayed
      expect(mockInfo).toHaveBeenCalledWith('=== Task: task-1 ===');
      expect(mockInfo).toHaveBeenCalledWith('=== Task: task-2 ===');
      expect(mockInfo).toHaveBeenCalledWith('=== Task: task-3 ===');
      expect(mockStatus).toHaveBeenCalledWith('Total', '3');
    });

    it('should fill slots as tasks complete (worker pool behavior)', async () => {
      // Given: 5 tasks, concurrency=3
      // Worker pool should start 3, then fill slots as tasks complete
      const tasks = Array.from({ length: 5 }, (_, i) => createTask(`task-${i + 1}`));

      mockClaimNextTasks
        .mockReturnValueOnce(tasks.slice(0, 3))
        .mockReturnValueOnce(tasks.slice(3, 5))
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: All 5 tasks executed
      expect(mockStatus).toHaveBeenCalledWith('Total', '5');
    });
  });

  describe('default concurrency', () => {
    it('should default to sequential when concurrency is not set', async () => {
      // Given: Config without explicit concurrency (defaults to 1)
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 1,
      });

      const task1 = createTask('task-1');
      mockClaimNextTasks
        .mockReturnValueOnce([task1])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: No concurrency info displayed
      const concurrencyInfoCalls = mockInfo.mock.calls.filter(
        (call) => typeof call[0] === 'string' && call[0].startsWith('Concurrency:')
      );
      expect(concurrencyInfoCalls).toHaveLength(0);
    });
  });

  describe('parallel execution behavior', () => {
    const fakePieceConfig = {
      name: 'default',
      movements: [{ name: 'implement', personaDisplayName: 'coder' }],
      initialMovement: 'implement',
      maxIterations: 10,
    };

    beforeEach(() => {
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 3,
      });
      // Return a valid piece config so executeTask reaches executePiece
      mockLoadPieceByIdentifier.mockReturnValue(fakePieceConfig as never);
    });

    it('should run tasks concurrently, not sequentially', async () => {
      // Given: 2 tasks with delayed execution to verify concurrency
      const task1 = createTask('slow-1');
      const task2 = createTask('slow-2');

      const executionOrder: string[] = [];

      // Each task takes 50ms: if sequential, total > 100ms; if parallel, total ~50ms
      mockExecutePiece.mockImplementation((_config, task) => {
        executionOrder.push(`start:${task}`);
        return new Promise((resolve) => {
          setTimeout(() => {
            executionOrder.push(`end:${task}`);
            resolve({ success: true });
          }, 50);
        });
      });

      mockClaimNextTasks
        .mockReturnValueOnce([task1, task2])
        .mockReturnValueOnce([]);

      // When
      const startTime = Date.now();
      await runAllTasks('/project');
      const elapsed = Date.now() - startTime;

      // Then: Both tasks started before either completed (concurrent execution)
      expect(executionOrder[0]).toBe('start:Task: slow-1');
      expect(executionOrder[1]).toBe('start:Task: slow-2');
      // Elapsed time should be closer to 50ms than 100ms (allowing margin for CI)
      expect(elapsed).toBeLessThan(150);
    });

    it('should fill slots immediately when a task completes (no batch waiting)', async () => {
      // Given: 3 tasks, concurrency=2, task1 finishes quickly, task2 takes longer
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 2,
      });

      const task1 = createTask('fast');
      const task2 = createTask('slow');
      const task3 = createTask('after-fast');

      const executionOrder: string[] = [];

      mockExecutePiece.mockImplementation((_config, task) => {
        executionOrder.push(`start:${task}`);
        const delay = (task as string).includes('slow') ? 80 : 20;
        return new Promise((resolve) => {
          setTimeout(() => {
            executionOrder.push(`end:${task}`);
            resolve({ success: true });
          }, delay);
        });
      });

      mockClaimNextTasks
        .mockReturnValueOnce([task1, task2])
        .mockReturnValueOnce([task3])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: task3 starts before task2 finishes (slot filled immediately)
      const task3StartIdx = executionOrder.indexOf('start:Task: after-fast');
      const task2EndIdx = executionOrder.indexOf('end:Task: slow');
      expect(task3StartIdx).toBeLessThan(task2EndIdx);
      expect(mockStatus).toHaveBeenCalledWith('Total', '3');
    });

    it('should count partial failures correctly', async () => {
      // Given: 3 tasks, 1 fails, 2 succeed
      const task1 = createTask('pass-1');
      const task2 = createTask('fail-1');
      const task3 = createTask('pass-2');

      let callIndex = 0;
      mockExecutePiece.mockImplementation(() => {
        callIndex++;
        // Second call fails
        return Promise.resolve({ success: callIndex !== 2 });
      });

      mockClaimNextTasks
        .mockReturnValueOnce([task1, task2, task3])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: Correct success/fail counts
      expect(mockStatus).toHaveBeenCalledWith('Total', '3');
      expect(mockStatus).toHaveBeenCalledWith('Success', '2', undefined);
      expect(mockStatus).toHaveBeenCalledWith('Failed', '1', 'red');
    });

    it('should pass abortSignal and taskPrefix to executePiece in parallel mode', async () => {
      // Given: One task in parallel mode
      const task1 = createTask('parallel-task');

      mockExecutePiece.mockResolvedValue({ success: true });

      mockClaimNextTasks
        .mockReturnValueOnce([task1])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: executePiece received abortSignal and taskPrefix options
      expect(mockExecutePiece).toHaveBeenCalledTimes(1);
      const callArgs = mockExecutePiece.mock.calls[0];
      const pieceOptions = callArgs?.[3]; // 4th argument is options
      expect(pieceOptions).toHaveProperty('abortSignal');
      expect(pieceOptions?.abortSignal).toBeInstanceOf(AbortSignal);
      expect(pieceOptions).toHaveProperty('taskPrefix', 'parallel-task');
    });

    it('should not pass abortSignal or taskPrefix in sequential mode', async () => {
      // Given: Sequential mode
      mockLoadGlobalConfig.mockReturnValue({
        language: 'en',
        defaultPiece: 'default',
        logLevel: 'info',
        concurrency: 1,
      });

      const task1 = createTask('sequential-task');
      mockExecutePiece.mockResolvedValue({ success: true });
      mockLoadPieceByIdentifier.mockReturnValue(fakePieceConfig as never);

      mockClaimNextTasks
        .mockReturnValueOnce([task1])
        .mockReturnValueOnce([]);

      // When
      await runAllTasks('/project');

      // Then: executePiece should not have abortSignal or taskPrefix
      expect(mockExecutePiece).toHaveBeenCalledTimes(1);
      const callArgs = mockExecutePiece.mock.calls[0];
      const pieceOptions = callArgs?.[3];
      expect(pieceOptions?.abortSignal).toBeUndefined();
      expect(pieceOptions?.taskPrefix).toBeUndefined();
    });
  });
});

@@ -17,6 +17,11 @@ vi.mock('../shared/ui/index.js', () => ({
  blankLine: vi.fn(),
}));

vi.mock('../shared/prompt/index.js', () => ({
  confirm: vi.fn(),
  promptInput: vi.fn(),
}));

vi.mock('../shared/utils/index.js', async (importOriginal) => ({
  ...(await importOriginal<Record<string, unknown>>()),
  createLogger: () => ({
@@ -28,11 +33,14 @@ vi.mock('../shared/utils/index.js', async (importOriginal) => ({

import { summarizeTaskName } from '../infra/task/summarize.js';
import { success, info } from '../shared/ui/index.js';
import { confirm, promptInput } from '../shared/prompt/index.js';
import { saveTaskFile, saveTaskFromInteractive } from '../features/tasks/add/index.js';

const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
const mockSuccess = vi.mocked(success);
const mockInfo = vi.mocked(info);
const mockConfirm = vi.mocked(confirm);
const mockPromptInput = vi.mocked(promptInput);

let testDir: string;

@@ -163,16 +171,82 @@ describe('saveTaskFile', () => {
});

describe('saveTaskFromInteractive', () => {
-  it('should save task and display success message', async () => {
+  it('should save task with worktree settings when user confirms worktree', async () => {
    // Given: user confirms worktree, accepts defaults, confirms auto-PR
    mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
    mockPromptInput.mockResolvedValueOnce(''); // Worktree path → auto
    mockPromptInput.mockResolvedValueOnce(''); // Branch name → auto
    mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes

    // When
    await saveTaskFromInteractive(testDir, 'Task content');

    // Then
    expect(mockSuccess).toHaveBeenCalledWith('Task created: test-task.yaml');
    expect(mockInfo).toHaveBeenCalledWith(expect.stringContaining('Path:'));
    const tasksDir = path.join(testDir, '.takt', 'tasks');
    const files = fs.readdirSync(tasksDir);
    const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
    expect(content).toContain('worktree: true');
    expect(content).toContain('auto_pr: true');
  });

  it('should save task without worktree settings when user declines worktree', async () => {
    // Given: user declines worktree
    mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No

    // When
    await saveTaskFromInteractive(testDir, 'Task content');

    // Then
    expect(mockSuccess).toHaveBeenCalledWith('Task created: test-task.yaml');
    const tasksDir = path.join(testDir, '.takt', 'tasks');
    const files = fs.readdirSync(tasksDir);
    const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
    expect(content).not.toContain('worktree:');
    expect(content).not.toContain('branch:');
    expect(content).not.toContain('auto_pr:');
  });

  it('should save custom worktree path and branch when specified', async () => {
    // Given: user specifies custom path and branch
    mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
    mockPromptInput.mockResolvedValueOnce('/custom/path'); // Worktree path
    mockPromptInput.mockResolvedValueOnce('feat/branch'); // Branch name
    mockConfirm.mockResolvedValueOnce(false); // Auto-create PR? → No

    // When
    await saveTaskFromInteractive(testDir, 'Task content');

    // Then
    const tasksDir = path.join(testDir, '.takt', 'tasks');
    const files = fs.readdirSync(tasksDir);
const content = fs.readFileSync(path.join(tasksDir, files[0]!), 'utf-8');
|
||||
expect(content).toContain('worktree: /custom/path');
|
||||
expect(content).toContain('branch: feat/branch');
|
||||
expect(content).toContain('auto_pr: false');
|
||||
});
|
||||
|
||||
it('should display worktree/branch/auto-PR info when settings are provided', async () => {
|
||||
// Given
|
||||
mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
|
||||
mockPromptInput.mockResolvedValueOnce('/my/path'); // Worktree path
|
||||
mockPromptInput.mockResolvedValueOnce('my-branch'); // Branch name
|
||||
mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes
|
||||
|
||||
// When
|
||||
await saveTaskFromInteractive(testDir, 'Task content');
|
||||
|
||||
// Then
|
||||
expect(mockInfo).toHaveBeenCalledWith(' Worktree: /my/path');
|
||||
expect(mockInfo).toHaveBeenCalledWith(' Branch: my-branch');
|
||||
expect(mockInfo).toHaveBeenCalledWith(' Auto-PR: yes');
|
||||
});
|
||||
|
||||
it('should display piece info when specified', async () => {
|
||||
// Given
|
||||
mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
|
||||
|
||||
// When
|
||||
await saveTaskFromInteractive(testDir, 'Task content', 'review');
|
||||
|
||||
@ -181,6 +255,9 @@ describe('saveTaskFromInteractive', () => {
|
||||
});
|
||||
|
||||
it('should include piece in saved YAML', async () => {
|
||||
// Given
|
||||
mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
|
||||
|
||||
// When
|
||||
await saveTaskFromInteractive(testDir, 'Task content', 'custom');
|
||||
|
||||
@ -193,6 +270,9 @@ describe('saveTaskFromInteractive', () => {
|
||||
});
|
||||
|
||||
it('should not display piece info when not specified', async () => {
|
||||
// Given
|
||||
mockConfirm.mockResolvedValueOnce(false); // Create worktree? → No
|
||||
|
||||
// When
|
||||
await saveTaskFromInteractive(testDir, 'Task content');
|
||||
|
||||
@ -202,4 +282,18 @@ describe('saveTaskFromInteractive', () => {
|
||||
);
|
||||
expect(pieceInfoCalls.length).toBe(0);
|
||||
});
|
||||
|
||||
it('should display auto worktree info when no custom path', async () => {
|
||||
// Given
|
||||
mockConfirm.mockResolvedValueOnce(true); // Create worktree? → Yes
|
||||
mockPromptInput.mockResolvedValueOnce(''); // Worktree path → auto
|
||||
mockPromptInput.mockResolvedValueOnce(''); // Branch name → auto
|
||||
mockConfirm.mockResolvedValueOnce(true); // Auto-create PR? → Yes
|
||||
|
||||
// When
|
||||
await saveTaskFromInteractive(testDir, 'Task content');
|
||||
|
||||
// Then
|
||||
expect(mockInfo).toHaveBeenCalledWith(' Worktree: auto');
|
||||
});
|
||||
});
|
||||
|
||||
@ -23,6 +23,7 @@ vi.mock('../infra/task/index.js', () => ({
|
||||
createSharedClone: vi.fn(),
|
||||
autoCommitAndPush: vi.fn(),
|
||||
summarizeTaskName: vi.fn(),
|
||||
getCurrentBranch: vi.fn(() => 'main'),
|
||||
}));
|
||||
|
||||
vi.mock('../shared/ui/index.js', () => ({
|
||||
|
||||
53
src/__tests__/session-key.test.ts
Normal file
@@ -0,0 +1,53 @@
/**
 * Tests for session key generation
 */

import { describe, it, expect } from 'vitest';
import { buildSessionKey } from '../core/piece/session-key.js';
import type { PieceMovement } from '../core/models/types.js';

function createMovement(overrides: Partial<PieceMovement> = {}): PieceMovement {
return {
name: 'test-movement',
personaDisplayName: 'test',
edit: false,
instructionTemplate: '',
passPreviousResponse: true,
...overrides,
};
}

describe('buildSessionKey', () => {
it('should use persona as base key when persona is set', () => {
const step = createMovement({ persona: 'coder', name: 'implement' });
expect(buildSessionKey(step)).toBe('coder');
});

it('should use name as base key when persona is not set', () => {
const step = createMovement({ persona: undefined, name: 'plan' });
expect(buildSessionKey(step)).toBe('plan');
});

it('should append provider when provider is specified', () => {
const step = createMovement({ persona: 'coder', provider: 'claude' });
expect(buildSessionKey(step)).toBe('coder:claude');
});

it('should use name with provider when persona is not set', () => {
const step = createMovement({ persona: undefined, name: 'review', provider: 'codex' });
expect(buildSessionKey(step)).toBe('review:codex');
});

it('should produce different keys for same persona with different providers', () => {
const claudeStep = createMovement({ persona: 'coder', provider: 'claude', name: 'claude-eye' });
const codexStep = createMovement({ persona: 'coder', provider: 'codex', name: 'codex-eye' });
expect(buildSessionKey(claudeStep)).not.toBe(buildSessionKey(codexStep));
expect(buildSessionKey(claudeStep)).toBe('coder:claude');
expect(buildSessionKey(codexStep)).toBe('coder:codex');
});

it('should not append provider when provider is undefined', () => {
const step = createMovement({ persona: 'coder', provider: undefined });
expect(buildSessionKey(step)).toBe('coder');
});
});
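The resolution rule these tests pin down can be sketched in a few lines. This is a hypothetical reconstruction, not the real `buildSessionKey` from `src/core/piece/session-key.ts`; the `MovementLike` shape is trimmed to just the fields the key depends on:

```typescript
// Sketch only: the base key is the persona when set, otherwise the
// movement name; a ":provider" suffix keeps per-provider sessions distinct.
interface MovementLike {
  name: string;
  persona?: string;
  provider?: string;
}

function buildSessionKey(step: MovementLike): string {
  const base = step.persona ?? step.name;
  return step.provider ? `${base}:${step.provider}` : base;
}
```

With this rule, two movements sharing a persona but targeting different providers (as in the compound-eye case) resolve to distinct session keys.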
@@ -71,7 +71,7 @@ describe('preventSleep', () => {

expect(spawn).toHaveBeenCalledWith(
'/usr/bin/caffeinate',
['-i', '-w', String(process.pid)],
['-di', '-w', String(process.pid)],
{ stdio: 'ignore', detached: true }
);
expect(mockChild.unref).toHaveBeenCalled();

@@ -144,6 +144,117 @@ describe('TaskRunner', () => {
});
});

describe('claimNextTasks', () => {
it('should return empty array when no tasks', () => {
const tasks = runner.claimNextTasks(3);
expect(tasks).toEqual([]);
});

it('should return tasks up to the requested count', () => {
const tasksDir = join(testDir, '.takt', 'tasks');
mkdirSync(tasksDir, { recursive: true });
writeFileSync(join(tasksDir, 'a-task.md'), 'A');
writeFileSync(join(tasksDir, 'b-task.md'), 'B');
writeFileSync(join(tasksDir, 'c-task.md'), 'C');

const tasks = runner.claimNextTasks(2);
expect(tasks).toHaveLength(2);
expect(tasks[0]?.name).toBe('a-task');
expect(tasks[1]?.name).toBe('b-task');
});

it('should not return already claimed tasks on subsequent calls', () => {
const tasksDir = join(testDir, '.takt', 'tasks');
mkdirSync(tasksDir, { recursive: true });
writeFileSync(join(tasksDir, 'a-task.md'), 'A');
writeFileSync(join(tasksDir, 'b-task.md'), 'B');
writeFileSync(join(tasksDir, 'c-task.md'), 'C');

// Given: first call claims a-task
const first = runner.claimNextTasks(1);
expect(first).toHaveLength(1);
expect(first[0]?.name).toBe('a-task');

// When: second call should skip a-task
const second = runner.claimNextTasks(1);
expect(second).toHaveLength(1);
expect(second[0]?.name).toBe('b-task');

// When: third call should skip a-task and b-task
const third = runner.claimNextTasks(1);
expect(third).toHaveLength(1);
expect(third[0]?.name).toBe('c-task');

// When: fourth call should return empty (all claimed)
const fourth = runner.claimNextTasks(1);
expect(fourth).toEqual([]);
});

it('should release claim after completeTask', () => {
const tasksDir = join(testDir, '.takt', 'tasks');
mkdirSync(tasksDir, { recursive: true });
writeFileSync(join(tasksDir, 'task-a.md'), 'Task A content');

// Given: claim the task
const claimed = runner.claimNextTasks(1);
expect(claimed).toHaveLength(1);

// When: complete the task (file is moved away)
runner.completeTask({
task: claimed[0]!,
success: true,
response: 'Done',
executionLog: [],
startedAt: '2024-01-01T00:00:00.000Z',
completedAt: '2024-01-01T00:01:00.000Z',
});

// Then: claim set no longer blocks (but file is moved, so no tasks anyway)
const next = runner.claimNextTasks(1);
expect(next).toEqual([]);
});

it('should release claim after failTask', () => {
const tasksDir = join(testDir, '.takt', 'tasks');
mkdirSync(tasksDir, { recursive: true });
writeFileSync(join(tasksDir, 'task-a.md'), 'Task A content');

// Given: claim the task
const claimed = runner.claimNextTasks(1);
expect(claimed).toHaveLength(1);

// When: fail the task (file is moved away)
runner.failTask({
task: claimed[0]!,
success: false,
response: 'Error',
executionLog: [],
startedAt: '2024-01-01T00:00:00.000Z',
completedAt: '2024-01-01T00:01:00.000Z',
});

// Then: claim set no longer blocks
const next = runner.claimNextTasks(1);
expect(next).toEqual([]);
});

it('should not affect getNextTask (unclaimed access)', () => {
const tasksDir = join(testDir, '.takt', 'tasks');
mkdirSync(tasksDir, { recursive: true });
writeFileSync(join(tasksDir, 'a-task.md'), 'A');
writeFileSync(join(tasksDir, 'b-task.md'), 'B');

// Given: claim a-task via claimNextTasks
runner.claimNextTasks(1);

// When: getNextTask is called (no claim filtering)
const task = runner.getNextTask();

// Then: getNextTask still returns first task (including claimed)
expect(task?.name).toBe('a-task');
});
});
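The claim semantics exercised above — claimed tasks are skipped by later calls, and completing or failing a task releases its claim — can be sketched with a set keyed by task name. This is an illustrative reconstruction, not the actual `TaskRunner` internals:

```typescript
// Sketch: a claim set that hands out up to `count` unclaimed candidates
// and can release a claim when a task completes or fails.
class ClaimSet {
  private claimed = new Set<string>();

  // Pick up to `count` candidates not yet claimed, marking them claimed.
  claim(candidates: string[], count: number): string[] {
    const picked: string[] = [];
    for (const name of candidates) {
      if (picked.length >= count) break;
      if (!this.claimed.has(name)) {
        this.claimed.add(name);
        picked.push(name);
      }
    }
    return picked;
  }

  // Release a claim so the task could be picked up again.
  release(name: string): void {
    this.claimed.delete(name);
  }
}
```

In the real runner the completed/failed task file is also moved away, which is why a released claim does not cause the task to be re-claimed.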

describe('completeTask', () => {
it('should move task to completed directory', () => {
const tasksDir = join(testDir, '.takt', 'tasks');

@@ -25,6 +25,11 @@ vi.mock('../infra/task/clone.js', async (importOriginal) => ({
removeClone: vi.fn(),
}));

vi.mock('../infra/task/git.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
getCurrentBranch: vi.fn(() => 'main'),
}));

vi.mock('../infra/task/autoCommit.js', async (importOriginal) => ({
...(await importOriginal<Record<string, unknown>>()),
autoCommitAndPush: vi.fn(),
@@ -68,12 +73,14 @@ vi.mock('../shared/constants.js', () => ({
}));

import { createSharedClone } from '../infra/task/clone.js';
import { getCurrentBranch } from '../infra/task/git.js';
import { summarizeTaskName } from '../infra/task/summarize.js';
import { info } from '../shared/ui/index.js';
import { resolveTaskExecution } from '../features/tasks/index.js';
import type { TaskInfo } from '../infra/task/index.js';

const mockCreateSharedClone = vi.mocked(createSharedClone);
const mockGetCurrentBranch = vi.mocked(getCurrentBranch);
const mockSummarizeTaskName = vi.mocked(summarizeTaskName);
const mockInfo = vi.mocked(info);

@@ -150,11 +157,13 @@ describe('resolveTaskExecution', () => {
branch: undefined,
taskSlug: 'add-auth',
});
expect(mockGetCurrentBranch).toHaveBeenCalledWith('/project');
expect(result).toEqual({
execCwd: '/project/../20260128T0504-add-auth',
execPiece: 'default',
isWorktree: true,
branch: 'takt/20260128T0504-add-auth',
baseBranch: 'main',
});
});

@@ -396,4 +405,87 @@ describe('resolveTaskExecution', () => {
// Then
expect(result.autoPr).toBe(false);
});

it('should capture baseBranch from getCurrentBranch when worktree is used', async () => {
// Given: Task with worktree, on 'develop' branch
mockGetCurrentBranch.mockReturnValue('develop');
const task: TaskInfo = {
name: 'task-on-develop',
content: 'Task on develop branch',
filePath: '/tasks/task.yaml',
data: {
task: 'Task on develop branch',
worktree: true,
},
};

mockSummarizeTaskName.mockResolvedValue('task-develop');
mockCreateSharedClone.mockReturnValue({
path: '/project/../task-develop',
branch: 'takt/task-develop',
});

// When
const result = await resolveTaskExecution(task, '/project', 'default');

// Then
expect(mockGetCurrentBranch).toHaveBeenCalledWith('/project');
expect(result.baseBranch).toBe('develop');
});

it('should not set baseBranch when worktree is not used', async () => {
// Given: Task without worktree
const task: TaskInfo = {
name: 'task-no-worktree',
content: 'Task without worktree',
filePath: '/tasks/task.yaml',
data: {
task: 'Task without worktree',
},
};

// When
const result = await resolveTaskExecution(task, '/project', 'default');

// Then
expect(mockGetCurrentBranch).not.toHaveBeenCalled();
expect(result.baseBranch).toBeUndefined();
});

it('should return issueNumber from task data when specified', async () => {
// Given: Task with issue number
const task: TaskInfo = {
name: 'task-with-issue',
content: 'Fix authentication bug',
filePath: '/tasks/task.yaml',
data: {
task: 'Fix authentication bug',
issue: 131,
},
};

// When
const result = await resolveTaskExecution(task, '/project', 'default');

// Then
expect(result.issueNumber).toBe(131);
});

it('should return undefined issueNumber when task data has no issue', async () => {
// Given: Task without issue
const task: TaskInfo = {
name: 'task-no-issue',
content: 'Task content',
filePath: '/tasks/task.yaml',
data: {
task: 'Task content',
},
};

// When
const result = await resolveTaskExecution(task, '/project', 'default');

// Then
expect(result.issueNumber).toBeUndefined();
});
});

230
src/__tests__/workerPool.test.ts
Normal file
@@ -0,0 +1,230 @@
/**
 * Unit tests for runWithWorkerPool
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { TaskInfo } from '../infra/task/index.js';

vi.mock('../shared/ui/index.js', () => ({
header: vi.fn(),
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
success: vi.fn(),
status: vi.fn(),
blankLine: vi.fn(),
}));

vi.mock('../shared/exitCodes.js', () => ({
EXIT_SIGINT: 130,
}));

vi.mock('../shared/i18n/index.js', () => ({
getLabel: vi.fn((key: string) => key),
}));

const mockExecuteAndCompleteTask = vi.fn();

vi.mock('../features/tasks/execute/taskExecution.js', () => ({
executeAndCompleteTask: (...args: unknown[]) => mockExecuteAndCompleteTask(...args),
}));

import { runWithWorkerPool } from '../features/tasks/execute/parallelExecution.js';
import { info } from '../shared/ui/index.js';

const mockInfo = vi.mocked(info);

function createTask(name: string): TaskInfo {
return {
name,
content: `Task: ${name}`,
filePath: `/tasks/${name}.yaml`,
};
}

function createMockTaskRunner(taskBatches: TaskInfo[][]) {
let batchIndex = 0;
return {
getNextTask: vi.fn(() => null),
claimNextTasks: vi.fn(() => {
const batch = taskBatches[batchIndex] ?? [];
batchIndex++;
return batch;
}),
completeTask: vi.fn(),
failTask: vi.fn(),
};
}

beforeEach(() => {
vi.clearAllMocks();
mockExecuteAndCompleteTask.mockResolvedValue(true);
});

describe('runWithWorkerPool', () => {
it('should return correct counts for all successful tasks', async () => {
// Given
const tasks = [createTask('a'), createTask('b')];
const runner = createMockTaskRunner([]);

// When
const result = await runWithWorkerPool(runner as never, tasks, 2, '/cwd', 'default');

// Then
expect(result).toEqual({ success: 2, fail: 0 });
});

it('should return correct counts when some tasks fail', async () => {
// Given
const tasks = [createTask('pass'), createTask('fail'), createTask('pass2')];
let callIdx = 0;
mockExecuteAndCompleteTask.mockImplementation(() => {
callIdx++;
return Promise.resolve(callIdx !== 2);
});
const runner = createMockTaskRunner([]);

// When
const result = await runWithWorkerPool(runner as never, tasks, 3, '/cwd', 'default');

// Then
expect(result).toEqual({ success: 2, fail: 1 });
});

it('should display task name for each task', async () => {
// Given
const tasks = [createTask('alpha'), createTask('beta')];
const runner = createMockTaskRunner([]);

// When
await runWithWorkerPool(runner as never, tasks, 2, '/cwd', 'default');

// Then
expect(mockInfo).toHaveBeenCalledWith('=== Task: alpha ===');
expect(mockInfo).toHaveBeenCalledWith('=== Task: beta ===');
});

it('should pass taskPrefix for parallel execution (concurrency > 1)', async () => {
// Given
const tasks = [createTask('my-task')];
const runner = createMockTaskRunner([]);

// When
await runWithWorkerPool(runner as never, tasks, 2, '/cwd', 'default');

// Then
expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(1);
const parallelOpts = mockExecuteAndCompleteTask.mock.calls[0]?.[5];
expect(parallelOpts).toEqual({
abortSignal: expect.any(AbortSignal),
taskPrefix: 'my-task',
});
});

it('should not pass taskPrefix or abortSignal for sequential execution (concurrency = 1)', async () => {
// Given
const tasks = [createTask('seq-task')];
const runner = createMockTaskRunner([]);

// When
await runWithWorkerPool(runner as never, tasks, 1, '/cwd', 'default');

// Then
expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(1);
const parallelOpts = mockExecuteAndCompleteTask.mock.calls[0]?.[5];
expect(parallelOpts).toEqual({
abortSignal: undefined,
taskPrefix: undefined,
});
});

it('should fetch more tasks when slots become available', async () => {
// Given: 1 initial task, runner provides 1 more after
const task1 = createTask('first');
const task2 = createTask('second');
const runner = createMockTaskRunner([[task2]]);

// When
await runWithWorkerPool(runner as never, [task1], 2, '/cwd', 'default');

// Then
expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(2);
expect(runner.claimNextTasks).toHaveBeenCalled();
});

it('should respect concurrency limit', async () => {
// Given: 4 tasks, concurrency=2
const tasks = Array.from({ length: 4 }, (_, i) => createTask(`task-${i}`));

let activeCount = 0;
let maxActive = 0;

mockExecuteAndCompleteTask.mockImplementation(() => {
activeCount++;
maxActive = Math.max(maxActive, activeCount);
return new Promise((resolve) => {
setTimeout(() => {
activeCount--;
resolve(true);
}, 20);
});
});

const runner = createMockTaskRunner([]);

// When
await runWithWorkerPool(runner as never, tasks, 2, '/cwd', 'default');

// Then: Never exceeded concurrency of 2
expect(maxActive).toBeLessThanOrEqual(2);
expect(mockExecuteAndCompleteTask).toHaveBeenCalledTimes(4);
});

it('should pass abortSignal to all parallel tasks', async () => {
// Given: Multiple tasks in parallel mode
const tasks = [createTask('task-1'), createTask('task-2'), createTask('task-3')];
const runner = createMockTaskRunner([]);

const receivedSignals: (AbortSignal | undefined)[] = [];
mockExecuteAndCompleteTask.mockImplementation((_task, _runner, _cwd, _piece, _opts, parallelOpts) => {
receivedSignals.push(parallelOpts?.abortSignal);
return Promise.resolve(true);
});

// When
await runWithWorkerPool(runner as never, tasks, 3, '/cwd', 'default');

// Then: All tasks received the same AbortSignal
expect(receivedSignals).toHaveLength(3);
const firstSignal = receivedSignals[0];
expect(firstSignal).toBeInstanceOf(AbortSignal);
for (const signal of receivedSignals) {
expect(signal).toBe(firstSignal);
}
});

it('should handle empty initial tasks', async () => {
// Given: No tasks
const runner = createMockTaskRunner([]);

// When
const result = await runWithWorkerPool(runner as never, [], 2, '/cwd', 'default');

// Then
expect(result).toEqual({ success: 0, fail: 0 });
expect(mockExecuteAndCompleteTask).not.toHaveBeenCalled();
});

it('should handle task promise rejection gracefully', async () => {
// Given: Task that throws
const tasks = [createTask('throws')];
mockExecuteAndCompleteTask.mockRejectedValue(new Error('boom'));
const runner = createMockTaskRunner([]);

// When
const result = await runWithWorkerPool(runner as never, tasks, 1, '/cwd', 'default');

// Then: Treated as failure
expect(result).toEqual({ success: 0, fail: 1 });
});
});
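A minimal worker-pool shape consistent with these tests — N workers draining a shared queue, rejections tallied as failures — might look like the following. This is a sketch under the assumption that each task resolves to a boolean success flag; the real `runWithWorkerPool` additionally re-polls the runner for newly claimed tasks and threads an `AbortSignal` through to each task:

```typescript
// Sketch: run items with at most `concurrency` in flight, returning
// success/fail counts. A thrown error counts as a failure.
async function runPool<T>(
  items: T[],
  concurrency: number,
  run: (item: T) => Promise<boolean>,
): Promise<{ success: number; fail: number }> {
  const counts = { success: 0, fail: 0 };
  const queue = [...items];

  async function worker(): Promise<void> {
    // Each worker pulls from the shared queue until it is drained.
    for (let item = queue.shift(); item !== undefined; item = queue.shift()) {
      try {
        (await run(item)) ? counts.success++ : counts.fail++;
      } catch {
        counts.fail++; // rejection is treated as a failed task
      }
    }
  }

  await Promise.all(Array.from({ length: Math.max(1, concurrency) }, worker));
  return counts;
}
```

Because workers share one queue, at most `concurrency` tasks are ever in flight, matching the `maxActive` assertion above.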
@@ -102,6 +102,7 @@ export class AgentRunner {
cwd: options.cwd,
sessionId: options.sessionId,
allowedTools: options.allowedTools ?? agentConfig?.allowedTools,
mcpServers: options.mcpServers,
maxTurns: options.maxTurns,
model: AgentRunner.resolveModel(resolvedProvider, options, agentConfig),
permissionMode: options.permissionMode,

@@ -3,7 +3,7 @@
 */

import type { StreamCallback, PermissionHandler, AskUserQuestionHandler } from '../infra/claude/index.js';
import type { PermissionMode, Language } from '../core/models/index.js';
import type { PermissionMode, Language, McpServerConfig } from '../core/models/index.js';

export type { StreamCallback };

@@ -17,6 +17,8 @@ export interface RunAgentOptions {
personaPath?: string;
/** Allowed tools for this agent run */
allowedTools?: string[];
/** MCP servers for this agent run */
mcpServers?: Record<string, McpServerConfig>;
/** Maximum number of agentic turns */
maxTurns?: number;
/** Permission mode for tool execution (from piece step) */

@@ -1,14 +1,15 @@
/**
 * CLI subcommand definitions
 *
 * Registers all named subcommands (run, watch, add, list, switch, clear, eject, config, prompt).
 * Registers all named subcommands (run, watch, add, list, switch, clear, eject, config, prompt, catalog).
 */

import { clearPersonaSessions, getCurrentPiece } from '../../infra/config/index.js';
import { success } from '../../shared/ui/index.js';
import { runAllTasks, addTask, watchTasks, listTasks } from '../../features/tasks/index.js';
import { switchPiece, switchConfig, ejectBuiltin, resetCategoriesToDefault, deploySkill } from '../../features/config/index.js';
import { switchPiece, switchConfig, ejectBuiltin, ejectFacet, parseFacetType, VALID_FACET_TYPES, resetCategoriesToDefault, deploySkill } from '../../features/config/index.js';
import { previewPrompts } from '../../features/prompt/index.js';
import { showCatalog } from '../../features/catalog/index.js';
import { program, resolvedCwd } from './program.js';
import { resolveAgentOverrides } from './helpers.js';

@@ -75,11 +76,24 @@ program

program
.command('eject')
.description('Copy builtin piece/agents for customization (default: project .takt/)')
.argument('[name]', 'Specific builtin to eject')
.description('Copy builtin piece or facet for customization (default: project .takt/)')
.argument('[typeOrName]', `Piece name, or facet type (${VALID_FACET_TYPES.join(', ')})`)
.argument('[facetName]', 'Facet name (when first arg is a facet type)')
.option('--global', 'Eject to ~/.takt/ instead of project .takt/')
.action(async (name: string | undefined, opts: { global?: boolean }) => {
await ejectBuiltin(name, { global: opts.global, projectDir: resolvedCwd });
.action(async (typeOrName: string | undefined, facetName: string | undefined, opts: { global?: boolean }) => {
const ejectOptions = { global: opts.global, projectDir: resolvedCwd };

if (typeOrName && facetName) {
const facetType = parseFacetType(typeOrName);
if (!facetType) {
console.error(`Invalid facet type: ${typeOrName}. Valid types: ${VALID_FACET_TYPES.join(', ')}`);
process.exitCode = 1;
return;
}
await ejectFacet(facetType, facetName, ejectOptions);
} else {
await ejectBuiltin(typeOrName, ejectOptions);
}
});
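The two-argument dispatch above (`takt eject <type> <name>` vs. `takt eject <piece>`) hinges on `parseFacetType` returning `null` for anything that is not a facet type. A hypothetical sketch of that guard follows; the concrete `VALID_FACET_TYPES` values are assumptions inferred from the facet kinds the catalog lists, not the project's actual constants:

```typescript
// Sketch: narrow an arbitrary CLI string to a known facet type, or null.
// The exact type names are assumed, not taken from the real codebase.
const VALID_FACET_TYPES = [
  'persona',
  'policy',
  'knowledge',
  'instruction',
  'output-contract',
] as const;

type FacetType = (typeof VALID_FACET_TYPES)[number];

function parseFacetType(input: string): FacetType | null {
  return (VALID_FACET_TYPES as readonly string[]).includes(input)
    ? (input as FacetType)
    : null;
}
```

Returning `null` rather than throwing lets the caller fall back to treating the first argument as a piece name.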

program
@@ -115,3 +129,11 @@ program
.action(async () => {
await deploySkill();
});

program
.command('catalog')
.description('List available facets (personas, policies, knowledge, instructions, output-contracts)')
.argument('[type]', 'Facet type to list')
.action((type?: string) => {
showCatalog(resolvedCwd, type);
});

@@ -7,15 +7,57 @@

import { info, error } from '../../shared/ui/index.js';
import { getErrorMessage } from '../../shared/utils/index.js';
import { fetchIssue, formatIssueAsTask, checkGhCli, parseIssueNumbers } from '../../infra/github/index.js';
import { fetchIssue, formatIssueAsTask, checkGhCli, parseIssueNumbers, type GitHubIssue } from '../../infra/github/index.js';
import { selectAndExecuteTask, determinePiece, saveTaskFromInteractive, createIssueFromTask, type SelectAndExecuteOptions } from '../../features/tasks/index.js';
import { executePipeline } from '../../features/pipeline/index.js';
import { interactiveMode } from '../../features/interactive/index.js';
import { getPieceDescription } from '../../infra/config/index.js';
import { getPieceDescription, loadGlobalConfig } from '../../infra/config/index.js';
import { DEFAULT_PIECE_NAME } from '../../shared/constants.js';
import { program, resolvedCwd, pipelineMode } from './program.js';
import { resolveAgentOverrides, parseCreateWorktreeOption, isDirectTask } from './helpers.js';

/**
 * Resolve issue references from CLI input.
 *
 * Handles two sources:
 * - --issue N option (numeric issue number)
 * - Positional argument containing issue references (#N or "#1 #2")
 *
 * Returns resolved issues and the formatted task text for interactive mode.
 * Throws on gh CLI unavailability or fetch failure.
 */
function resolveIssueInput(
issueOption: number | undefined,
task: string | undefined,
): { issues: GitHubIssue[]; initialInput: string } | null {
if (issueOption) {
info('Fetching GitHub Issue...');
const ghStatus = checkGhCli();
if (!ghStatus.available) {
throw new Error(ghStatus.error);
}
const issue = fetchIssue(issueOption);
return { issues: [issue], initialInput: formatIssueAsTask(issue) };
}

if (task && isDirectTask(task)) {
info('Fetching GitHub Issue...');
const ghStatus = checkGhCli();
if (!ghStatus.available) {
throw new Error(ghStatus.error);
}
const tokens = task.trim().split(/\s+/);
const issueNumbers = parseIssueNumbers(tokens);
if (issueNumbers.length === 0) {
throw new Error(`Invalid issue reference: ${task}`);
}
const issues = issueNumbers.map((n) => fetchIssue(n));
return { issues, initialInput: issues.map(formatIssueAsTask).join('\n\n---\n\n') };
}

return null;
}
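The token parsing that `resolveIssueInput` delegates to can be illustrated with a hedged stand-in for `parseIssueNumbers` (the real implementation lives in `infra/github`; this sketch assumes references are bare `#N` tokens, matching the "#6" and "#1 #2" forms described above):

```typescript
// Sketch: extract issue numbers from whitespace-split tokens.
// Only tokens of the exact form "#<digits>" are accepted.
function parseIssueNumbers(tokens: string[]): number[] {
  const numbers: number[] = [];
  for (const token of tokens) {
    const match = /^#(\d+)$/.exec(token);
    if (match) numbers.push(Number(match[1]));
  }
  return numbers;
}
```

An empty result signals "not an issue reference", which is why the caller treats zero parsed numbers as an invalid reference error.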
|
||||
|
||||
/**
|
||||
* Execute default action: handle task execution, pipeline mode, or interactive mode.
|
||||
* Exported for use in slash-command fallback logic.
|
||||
@@ -54,66 +96,38 @@ export async function executeDefaultAction(task?: string): Promise<void> {

   // --- Normal (interactive) mode ---

-  // Resolve --task option to task text
+  // Resolve --task option to task text (direct execution, no interactive mode)
   const taskFromOption = opts.task as string | undefined;
   if (taskFromOption) {
     await selectAndExecuteTask(resolvedCwd, taskFromOption, selectOptions, agentOverrides);
     return;
   }

-  // Resolve --issue N to task text (same as #N)
-  const issueFromOption = opts.issue as number | undefined;
-  if (issueFromOption) {
+  // Resolve issue references (--issue N or #N positional arg) before interactive mode
+  let initialInput: string | undefined = task;
+
   try {
-    const ghStatus = checkGhCli();
-    if (!ghStatus.available) {
-      throw new Error(ghStatus.error);
+    const issueResult = resolveIssueInput(opts.issue as number | undefined, task);
+    if (issueResult) {
+      selectOptions.issues = issueResult.issues;
+      initialInput = issueResult.initialInput;
     }
-    const issue = fetchIssue(issueFromOption);
-    const resolvedTask = formatIssueAsTask(issue);
-    selectOptions.issues = [issue];
-    await selectAndExecuteTask(resolvedCwd, resolvedTask, selectOptions, agentOverrides);
   } catch (e) {
     error(getErrorMessage(e));
     process.exit(1);
   }
-    return;
-  }
-
-  if (task && isDirectTask(task)) {
-    // isDirectTask() returns true only for issue references (e.g., "#6" or "#1 #2")
-    try {
-      info('Fetching GitHub Issue...');
-      const ghStatus = checkGhCli();
-      if (!ghStatus.available) {
-        throw new Error(ghStatus.error);
-      }
-      // Parse all issue numbers from task (supports "#6" and "#1 #2")
-      const tokens = task.trim().split(/\s+/);
-      const issueNumbers = parseIssueNumbers(tokens);
-      if (issueNumbers.length === 0) {
-        throw new Error(`Invalid issue reference: ${task}`);
-      }
-      const issues = issueNumbers.map((n) => fetchIssue(n));
-      const resolvedTask = issues.map(formatIssueAsTask).join('\n\n---\n\n');
-      selectOptions.issues = issues;
-      await selectAndExecuteTask(resolvedCwd, resolvedTask, selectOptions, agentOverrides);
-    } catch (e) {
-      error(getErrorMessage(e));
-      process.exit(1);
-    }
-    return;
-  }
-
-  // Non-issue inputs → interactive mode (with optional initial input)
+  // All paths below go through interactive mode
   const pieceId = await determinePiece(resolvedCwd, selectOptions.piece);
   if (pieceId === null) {
     info('Cancelled');
     return;
   }

-  const pieceContext = getPieceDescription(pieceId, resolvedCwd);
-  const result = await interactiveMode(resolvedCwd, task, pieceContext);
+  const globalConfig = loadGlobalConfig();
+  const previewCount = globalConfig.interactivePreviewMovements;
+  const pieceContext = getPieceDescription(pieceId, resolvedCwd, previewCount);
+  const result = await interactiveMode(resolvedCwd, initialInput, pieceContext);

   switch (result.action) {
     case 'execute':
@@ -65,6 +65,12 @@ export interface GlobalConfig {
   branchNameStrategy?: 'romaji' | 'ai';
   /** Prevent macOS idle sleep during takt execution using caffeinate (default: false) */
   preventSleep?: boolean;
+  /** Enable notification sounds (default: true when undefined) */
+  notificationSound?: boolean;
+  /** Number of movement previews to inject into interactive mode (0 to disable, max 10) */
+  interactivePreviewMovements?: number;
+  /** Number of tasks to run concurrently in takt run (default: 1 = sequential) */
+  concurrency: number;
 }

 /** Project-level configuration */
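The `concurrency` setting caps how many tasks `takt run` executes at once. The pool implementation itself is not part of this diff; the following is a minimal, dependency-free sketch of the worker-pool pattern the setting implies (all names here are illustrative, not the project's):

```typescript
// Illustrative worker-pool sketch: up to `concurrency` workers pull tasks
// from a shared cursor; concurrency = 1 degenerates to sequential execution.
async function runPool<T>(tasks: (() => Promise<T>)[], concurrency: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // claim an index before awaiting (single-threaded, so no race)
      results[i] = await tasks[i]!();
    }
  }
  const workers = Array.from(
    { length: Math.max(1, Math.min(concurrency, tasks.length)) },
    worker,
  );
  await Promise.all(workers);
  return results;
}
```

Results keep task order regardless of completion order, which matters when later output is matched back to its task.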
@@ -7,6 +7,7 @@ export type {
   OutputContractLabelPath,
   OutputContractItem,
   OutputContractEntry,
+  McpServerConfig,
   AgentResponse,
   SessionState,
   PieceRule,
src/core/models/mcp-schemas.ts (new file, 40 lines)

/**
 * Zod schemas for MCP (Model Context Protocol) server configuration.
 *
 * Supports three transports: stdio, SSE, and HTTP.
 * Note: Uses zod v4 syntax for SDK compatibility.
 */

import { z } from 'zod/v4';

/** MCP server configuration for stdio transport */
const McpStdioServerSchema = z.object({
  type: z.literal('stdio').optional(),
  command: z.string().min(1),
  args: z.array(z.string()).optional(),
  env: z.record(z.string(), z.string()).optional(),
});

/** MCP server configuration for SSE transport */
const McpSseServerSchema = z.object({
  type: z.literal('sse'),
  url: z.string().min(1),
  headers: z.record(z.string(), z.string()).optional(),
});

/** MCP server configuration for HTTP transport */
const McpHttpServerSchema = z.object({
  type: z.literal('http'),
  url: z.string().min(1),
  headers: z.record(z.string(), z.string()).optional(),
});

/** MCP server configuration (union of all YAML-configurable transports) */
export const McpServerConfigSchema = z.union([
  McpStdioServerSchema,
  McpSseServerSchema,
  McpHttpServerSchema,
]);

/** MCP servers map: server name → config */
export const McpServersSchema = z.record(z.string(), McpServerConfigSchema).optional();
@@ -53,6 +53,31 @@ export interface OutputContractItem {

 /** Union type for output contract entries */
 export type OutputContractEntry = OutputContractLabelPath | OutputContractItem;

+/** MCP server configuration for stdio transport */
+export interface McpStdioServerConfig {
+  type?: 'stdio';
+  command: string;
+  args?: string[];
+  env?: Record<string, string>;
+}
+
+/** MCP server configuration for SSE transport */
+export interface McpSseServerConfig {
+  type: 'sse';
+  url: string;
+  headers?: Record<string, string>;
+}
+
+/** MCP server configuration for HTTP transport */
+export interface McpHttpServerConfig {
+  type: 'http';
+  url: string;
+  headers?: Record<string, string>;
+}
+
+/** MCP server configuration (union of all YAML-configurable transports) */
+export type McpServerConfig = McpStdioServerConfig | McpSseServerConfig | McpHttpServerConfig;
+
 /** Single movement in a piece */
 export interface PieceMovement {
   name: string;

@@ -66,6 +91,8 @@ export interface PieceMovement {
   personaDisplayName: string;
   /** Allowed tools for this movement (optional, passed to agent execution) */
   allowedTools?: string[];
+  /** MCP servers configuration for this movement */
+  mcpServers?: Record<string, McpServerConfig>;
   /** Resolved absolute path to persona prompt file (set by loader) */
   personaPath?: string;
   /** Provider override for this movement */
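In this union, `type` is optional only for stdio, so a consumer can derive the transport with a single fallback. A self-contained sketch (the `transportOf` helper is hypothetical, not part of the diff; the interfaces are trimmed local copies of the union above):

```typescript
// Trimmed local copies of the McpServerConfig union, keeping only
// the fields needed to demonstrate transport narrowing.
interface McpStdioServerConfig { type?: 'stdio'; command: string; }
interface McpSseServerConfig { type: 'sse'; url: string; }
interface McpHttpServerConfig { type: 'http'; url: string; }
type McpServerConfig = McpStdioServerConfig | McpSseServerConfig | McpHttpServerConfig;

// Hypothetical helper: a missing `type` means stdio, matching the schema
// where only the stdio variant marks `type` as optional.
function transportOf(config: McpServerConfig): 'stdio' | 'sse' | 'http' {
  return config.type ?? 'stdio';
}
```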
@@ -6,6 +6,9 @@

 import { z } from 'zod/v4';
 import { DEFAULT_LANGUAGE } from '../../shared/constants.js';
+import { McpServersSchema } from './mcp-schemas.js';
+
+export { McpServerConfigSchema, McpServersSchema } from './mcp-schemas.js';

 /** Agent model schema (opus, sonnet, haiku) */
 export const AgentModelSchema = z.enum(['opus', 'sonnet', 'haiku']).default('sonnet');

@@ -137,6 +140,7 @@ export const ParallelSubMovementRawSchema = z.object({
   /** Knowledge reference(s) — key name(s) from piece-level knowledge map */
   knowledge: z.union([z.string(), z.array(z.string())]).optional(),
   allowed_tools: z.array(z.string()).optional(),
+  mcp_servers: McpServersSchema,
   provider: z.enum(['claude', 'codex', 'mock']).optional(),
   model: z.string().optional(),
   permission_mode: PermissionModeSchema.optional(),

@@ -166,6 +170,7 @@ export const PieceMovementRawSchema = z.object({
   /** Knowledge reference(s) — key name(s) from piece-level knowledge map */
   knowledge: z.union([z.string(), z.array(z.string())]).optional(),
   allowed_tools: z.array(z.string()).optional(),
+  mcp_servers: McpServersSchema,
   provider: z.enum(['claude', 'codex', 'mock']).optional(),
   model: z.string().optional(),
   /** Permission mode for tool execution in this movement */

@@ -311,6 +316,12 @@ export const GlobalConfigSchema = z.object({
   branch_name_strategy: z.enum(['romaji', 'ai']).optional(),
   /** Prevent macOS idle sleep during takt execution using caffeinate (default: false) */
   prevent_sleep: z.boolean().optional(),
+  /** Enable notification sounds (default: true when undefined) */
+  notification_sound: z.boolean().optional(),
+  /** Number of movement previews to inject into interactive mode (0 to disable, max 10) */
+  interactive_preview_movements: z.number().int().min(0).max(10).optional().default(3),
+  /** Number of tasks to run concurrently in takt run (default: 1 = sequential, max: 10) */
+  concurrency: z.number().int().min(1).max(10).optional().default(1),
 });

 /** Project config schema */
@@ -29,6 +29,7 @@ export type {
   OutputContractLabelPath,
   OutputContractItem,
   OutputContractEntry,
+  McpServerConfig,
   PieceMovement,
   LoopDetectionConfig,
   LoopMonitorConfig,
@@ -19,6 +19,7 @@ import { runAgent } from '../../../agents/runner.js';
 import { InstructionBuilder, isOutputContractItem } from '../instruction/InstructionBuilder.js';
 import { needsStatusJudgmentPhase, runReportPhase, runStatusJudgmentPhase } from '../phase-runner.js';
 import { detectMatchedRule } from '../evaluation/index.js';
+import { buildSessionKey } from '../session-key.js';
 import { incrementMovementIteration, getPreviousOutput } from './state-manager.js';
 import { createLogger } from '../../../shared/utils/index.js';
 import type { OptionsBuilder } from './OptionsBuilder.js';

@@ -100,7 +101,7 @@ export class MovementExecutor {
       ? state.movementIterations.get(step.name) ?? 1
       : incrementMovementIteration(state, step.name);
     const instruction = prebuiltInstruction ?? this.buildInstruction(step, movementIteration, state, task, maxIterations);
-    const sessionKey = step.persona ?? step.name;
+    const sessionKey = buildSessionKey(step);
     log.debug('Running movement', {
       movement: step.name,
       persona: step.persona ?? '(none)',
@@ -10,6 +10,7 @@ import type { PieceMovement, PieceState, Language } from '../../models/types.js'
 import type { RunAgentOptions } from '../../../agents/runner.js';
 import type { PhaseRunnerContext } from '../phase-runner.js';
 import type { PieceEngineOptions, PhaseName } from '../types.js';
+import { buildSessionKey } from '../session-key.js';

 export class OptionsBuilder {
   constructor(

@@ -66,8 +67,9 @@ export class OptionsBuilder {

     return {
       ...this.buildBaseOptions(step),
-      sessionId: shouldResumeSession ? this.getSessionId(step.persona ?? step.name) : undefined,
+      sessionId: shouldResumeSession ? this.getSessionId(buildSessionKey(step)) : undefined,
       allowedTools,
+      mcpServers: step.mcpServers,
     };
   }
@@ -15,7 +15,8 @@ import { ParallelLogger } from './parallel-logger.js';
 import { needsStatusJudgmentPhase, runReportPhase, runStatusJudgmentPhase } from '../phase-runner.js';
 import { detectMatchedRule } from '../evaluation/index.js';
 import { incrementMovementIteration } from './state-manager.js';
-import { createLogger } from '../../../shared/utils/index.js';
+import { createLogger, getErrorMessage } from '../../../shared/utils/index.js';
+import { buildSessionKey } from '../session-key.js';
 import type { OptionsBuilder } from './OptionsBuilder.js';
 import type { MovementExecutor } from './MovementExecutor.js';
 import type { PieceEngineOptions, PhaseName } from '../types.js';

@@ -86,12 +87,17 @@ export class ParallelRunner {
       callAiJudge: this.deps.callAiJudge,
     };

-    // Run all sub-movements concurrently
-    const subResults = await Promise.all(
+    // Run all sub-movements concurrently (failures are captured, not thrown)
+    const settled = await Promise.allSettled(
       subMovements.map(async (subMovement, index) => {
         const subIteration = incrementMovementIteration(state, subMovement.name);
         const subInstruction = this.deps.movementExecutor.buildInstruction(subMovement, subIteration, state, task, maxIterations);

+        // Session key uses buildSessionKey (persona:provider) — same as normal movements.
+        // This ensures sessions are shared across movements with the same persona+provider,
+        // while different providers (e.g., claude-eye vs codex-eye) get separate sessions.
+        const subSessionKey = buildSessionKey(subMovement);
+
         // Phase 1: main execution (Write excluded if sub-movement has report)
         const baseOptions = this.deps.optionsBuilder.buildAgentOptions(subMovement);

@@ -100,13 +106,12 @@
           ? { ...baseOptions, onStream: parallelLogger.createStreamHandler(subMovement.name, index) }
           : baseOptions;

-        const subSessionKey = subMovement.persona ?? subMovement.name;
         this.deps.onPhaseStart?.(subMovement, 1, 'execute', subInstruction);
         const subResponse = await runAgent(subMovement.persona, subInstruction, agentOptions);
         updatePersonaSession(subSessionKey, subResponse.sessionId);
         this.deps.onPhaseComplete?.(subMovement, 1, 'execute', subResponse.content, subResponse.status, subResponse.error);

-        // Build phase context for this sub-movement with its lastResponse
+        // Phase 2/3 context — no overrides needed, phase-runner uses buildSessionKey internally
         const phaseCtx = this.deps.optionsBuilder.buildPhaseRunnerContext(state, subResponse.content, updatePersonaSession, this.deps.onPhaseStart, this.deps.onPhaseComplete);

         // Phase 2: report output for sub-movement

@@ -132,6 +137,32 @@
       }),
     );

+    // Map settled results: fulfilled → as-is, rejected → blocked AgentResponse
+    const subResults = settled.map((result, index) => {
+      if (result.status === 'fulfilled') {
+        return result.value;
+      }
+      const failedMovement = subMovements[index]!;
+      const errorMsg = getErrorMessage(result.reason);
+      log.error('Sub-movement failed', { movement: failedMovement.name, error: errorMsg });
+      const blockedResponse: AgentResponse = {
+        persona: failedMovement.name,
+        status: 'blocked',
+        content: '',
+        timestamp: new Date(),
+        error: errorMsg,
+      };
+      state.movementOutputs.set(failedMovement.name, blockedResponse);
+      return { subMovement: failedMovement, response: blockedResponse, instruction: '' };
+    });
+
+    // If all sub-movements failed (error-originated), throw
+    const allFailed = subResults.every(r => r.response.error != null);
+    if (allFailed) {
+      const errors = subResults.map(r => `${r.subMovement.name}: ${r.response.error}`).join('; ');
+      throw new Error(`All parallel sub-movements failed: ${errors}`);
+    }
+
     // Print completion summary
     if (parallelLogger) {
       parallelLogger.printSummary(
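The `Promise.allSettled` + mapping pattern used here can be reduced to a dependency-free sketch: rejected tasks become placeholder results instead of aborting the whole batch, and only an all-failure case throws. Names below are illustrative, not the project's:

```typescript
interface TaskResult {
  name: string;
  ok: boolean;
  error?: string;
}

// Run every task to completion; capture per-task failures as results.
async function runAll(tasks: { name: string; run: () => Promise<string> }[]): Promise<TaskResult[]> {
  const settled = await Promise.allSettled(tasks.map((t) => t.run()));
  const results = settled.map((r, i): TaskResult =>
    r.status === 'fulfilled'
      ? { name: tasks[i]!.name, ok: true }
      : { name: tasks[i]!.name, ok: false, error: String(r.reason) },
  );
  // Only throw when nothing succeeded, mirroring the all-failed check above.
  if (results.every((r) => !r.ok)) {
    throw new Error(`All tasks failed: ${results.map((r) => `${r.name}: ${r.error}`).join('; ')}`);
  }
  return results;
}
```

The design choice: one slow or broken sub-movement should not discard the outputs of its siblings, which is exactly what bare `Promise.all` would do.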
@@ -14,6 +14,7 @@ import { ReportInstructionBuilder } from './instruction/ReportInstructionBuilder
 import { hasTagBasedRules, getReportFiles } from './evaluation/rule-utils.js';
 import { JudgmentStrategyFactory, type JudgmentContext } from './judgment/index.js';
 import { createLogger } from '../../shared/utils/index.js';
+import { buildSessionKey } from './session-key.js';

 const log = createLogger('phase-runner');

@@ -75,7 +76,7 @@ export async function runReportPhase(
   movementIteration: number,
   ctx: PhaseRunnerContext,
 ): Promise<void> {
-  const sessionKey = step.persona ?? step.name;
+  const sessionKey = buildSessionKey(step);
   let currentSessionId = ctx.getSessionId(sessionKey);
   if (!currentSessionId) {
     throw new Error(`Report phase requires a session to resume, but no sessionId found for persona "${sessionKey}" in movement "${step.name}"`);

@@ -159,7 +160,7 @@ export async function runStatusJudgmentPhase(

   // Try fallback strategies in order (including AutoSelectStrategy)
   const strategies = JudgmentStrategyFactory.createStrategies();
-  const sessionKey = step.persona ?? step.name;
+  const sessionKey = buildSessionKey(step);
   const judgmentContext: JudgmentContext = {
     step,
     cwd: ctx.cwd,
src/core/piece/session-key.ts (new file, 29 lines)

/**
 * Session key generation for persona sessions.
 *
 * When multiple movements share the same persona but use different providers
 * (e.g., claude-eye uses Claude, codex-eye uses Codex, both with persona "coder"),
 * sessions must be keyed by provider to prevent cross-provider contamination.
 *
 * Without provider in the key, a Codex session ID could overwrite a Claude session,
 * causing Claude to attempt resuming a non-existent session file (exit code 1).
 */

import type { PieceMovement } from '../models/types.js';

/**
 * Build a unique session key for a movement.
 *
 * - Base key: `step.persona ?? step.name`
 * - If the movement specifies a provider, appends `:{provider}` to disambiguate
 *
 * Examples:
 * - persona="coder", provider=undefined → "coder"
 * - persona="coder", provider="claude" → "coder:claude"
 * - persona="coder", provider="codex" → "coder:codex"
 * - persona=undefined, name="plan" → "plan"
 */
export function buildSessionKey(step: PieceMovement): string {
  const base = step.persona ?? step.name;
  return step.provider ? `${base}:${step.provider}` : base;
}
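The key scheme is small enough to exercise directly. A self-contained sketch, with `PieceMovement` reduced to the fields the key actually depends on:

```typescript
// Reduced stand-in for PieceMovement: only the fields buildSessionKey reads.
interface MovementKeyFields {
  name: string;
  persona?: string;
  provider?: string;
}

function buildSessionKey(step: MovementKeyFields): string {
  const base = step.persona ?? step.name;                    // persona wins over movement name
  return step.provider ? `${base}:${step.provider}` : base;  // provider disambiguates sessions
}
```

Two movements with persona "coder" and no provider share one session, while "coder" on claude and "coder" on codex resolve to distinct keys and never clobber each other's session IDs.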
src/features/catalog/catalogFacets.ts (new file, 178 lines)

/**
 * Facet catalog — scan and display available facets across 3 layers.
 *
 * Scans builtin, user (~/.takt/), and project (.takt/) directories
 * for facet files (.md) and displays them with layer provenance.
 */

import { existsSync, readdirSync, readFileSync } from 'node:fs';
import { join, basename } from 'node:path';
import chalk from 'chalk';
import type { PieceSource } from '../../infra/config/loaders/pieceResolver.js';
import { getLanguageResourcesDir } from '../../infra/resources/index.js';
import { getGlobalConfigDir, getProjectConfigDir } from '../../infra/config/paths.js';
import { getLanguage, getBuiltinPiecesEnabled } from '../../infra/config/global/globalConfig.js';
import { section, error as logError, info } from '../../shared/ui/index.js';

const FACET_TYPES = [
  'personas',
  'policies',
  'knowledge',
  'instructions',
  'output-contracts',
] as const;

export type FacetType = (typeof FACET_TYPES)[number];

export interface FacetEntry {
  name: string;
  description: string;
  source: PieceSource;
  overriddenBy?: PieceSource;
}

/** Validate a string as a FacetType. Returns the type or null. */
export function parseFacetType(input: string): FacetType | null {
  if ((FACET_TYPES as readonly string[]).includes(input)) {
    return input as FacetType;
  }
  return null;
}

/**
 * Extract description from a markdown file.
 * Returns the first `# ` heading text, or falls back to the first non-empty line.
 */
export function extractDescription(filePath: string): string {
  const content = readFileSync(filePath, 'utf-8');
  let firstNonEmpty = '';
  for (const line of content.split('\n')) {
    if (line.startsWith('# ')) {
      return line.slice(2).trim();
    }
    if (!firstNonEmpty && line.trim()) {
      firstNonEmpty = line.trim();
    }
  }
  return firstNonEmpty;
}

/** Build the 3-layer directory list for a given facet type. */
function getFacetDirs(
  facetType: FacetType,
  cwd: string,
): { dir: string; source: PieceSource }[] {
  const dirs: { dir: string; source: PieceSource }[] = [];

  if (getBuiltinPiecesEnabled()) {
    const lang = getLanguage();
    dirs.push({ dir: join(getLanguageResourcesDir(lang), facetType), source: 'builtin' });
  }

  dirs.push({ dir: join(getGlobalConfigDir(), facetType), source: 'user' });
  dirs.push({ dir: join(getProjectConfigDir(cwd), facetType), source: 'project' });

  return dirs;
}

/** Scan a single directory for .md facet files. */
function scanDirectory(dir: string): string[] {
  if (!existsSync(dir)) return [];
  return readdirSync(dir).filter((f) => f.endsWith('.md'));
}

/**
 * Scan all layers for facets of a given type.
 *
 * Scans builtin → user → project in order.
 * When a facet name appears in a higher-priority layer, the lower-priority
 * entry gets `overriddenBy` set to the overriding layer.
 */
export function scanFacets(facetType: FacetType, cwd: string): FacetEntry[] {
  const dirs = getFacetDirs(facetType, cwd);
  const entriesByName = new Map<string, FacetEntry>();
  const allEntries: FacetEntry[] = [];

  for (const { dir, source } of dirs) {
    const files = scanDirectory(dir);
    for (const file of files) {
      const name = basename(file, '.md');
      const description = extractDescription(join(dir, file));
      const entry: FacetEntry = { name, description, source };

      const existing = entriesByName.get(name);
      if (existing) {
        existing.overriddenBy = source;
      }

      entriesByName.set(name, entry);
      allEntries.push(entry);
    }
  }

  return allEntries;
}

/** Color a source tag for terminal display. */
function colorSourceTag(source: PieceSource): string {
  switch (source) {
    case 'builtin':
      return chalk.gray(`[${source}]`);
    case 'user':
      return chalk.yellow(`[${source}]`);
    case 'project':
      return chalk.green(`[${source}]`);
  }
}

/** Format and print a list of facet entries for one facet type. */
export function displayFacets(facetType: FacetType, entries: FacetEntry[]): void {
  section(`${capitalize(facetType)}:`);

  if (entries.length === 0) {
    console.log(chalk.gray('  (none)'));
    return;
  }

  const maxNameLen = Math.max(...entries.map((e) => e.name.length));
  const maxDescLen = Math.max(...entries.map((e) => e.description.length));

  for (const entry of entries) {
    const name = entry.name.padEnd(maxNameLen + 2);
    const desc = entry.description.padEnd(maxDescLen + 2);
    const tag = colorSourceTag(entry.source);
    const override = entry.overriddenBy
      ? chalk.gray(` (overridden by ${entry.overriddenBy})`)
      : '';
    console.log(`  ${name}${chalk.dim(desc)}${tag}${override}`);
  }
}

function capitalize(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

/**
 * Main entry point: show facet catalog.
 *
 * If facetType is provided, shows only that type.
 * Otherwise shows all facet types.
 */
export function showCatalog(cwd: string, facetType?: string): void {
  if (facetType !== undefined) {
    const parsed = parseFacetType(facetType);
    if (!parsed) {
      logError(`Unknown facet type: "${facetType}"`);
      info(`Available types: ${FACET_TYPES.join(', ')}`);
      return;
    }
    const entries = scanFacets(parsed, cwd);
    displayFacets(parsed, entries);
    return;
  }

  for (const type of FACET_TYPES) {
    const entries = scanFacets(type, cwd);
    displayFacets(type, entries);
  }
}
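The override bookkeeping inside `scanFacets` can be isolated into a small sketch: layers are scanned in priority order, and when a later (higher-priority) layer reuses a name, the earlier entry is marked as overridden. Types and names below are illustrative, not the project's:

```typescript
type Layer = 'builtin' | 'user' | 'project';

interface Entry {
  name: string;
  source: Layer;
  overriddenBy?: Layer;
}

// Collect entries across layers, builtin → user → project.
// Every entry is kept (for display), but a lower-priority entry whose name
// reappears later gets overriddenBy set to the winning layer.
function collect(layers: { source: Layer; names: string[] }[]): Entry[] {
  const byName = new Map<string, Entry>();
  const all: Entry[] = [];
  for (const { source, names } of layers) {
    for (const name of names) {
      const entry: Entry = { name, source };
      const existing = byName.get(name);
      if (existing) {
        existing.overriddenBy = source; // earlier layer loses to this one
      }
      byName.set(name, entry);
      all.push(entry);
    }
  }
  return all;
}
```

Keeping every entry rather than just the winner is what lets the catalog show `[builtin] (overridden by project)` style provenance.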
src/features/catalog/index.ts (new file, 5 lines)

/**
 * Catalog feature — list available facets across layers.
 */

export { showCatalog } from './catalogFacets.js';
@ -1,8 +1,9 @@
|
||||
/**
|
||||
* /eject command implementation
|
||||
*
|
||||
* Copies a builtin piece (and its personas/policies/instructions) for user customization.
|
||||
* Directory structure is mirrored so relative paths work as-is.
|
||||
* Copies a builtin piece YAML for user customization.
|
||||
* Also supports ejecting individual facets (persona, policy, etc.)
|
||||
* to override builtins via layer resolution.
|
||||
*
|
||||
* Default target: project-local (.takt/)
|
||||
* With --global: user global (~/.takt/)
|
||||
@ -10,35 +11,54 @@
|
||||
|
||||
import { existsSync, readdirSync, statSync, readFileSync, writeFileSync, mkdirSync } from 'node:fs';
|
||||
import { join, dirname } from 'node:path';
|
||||
import type { FacetType } from '../../infra/config/paths.js';
|
||||
import {
|
||||
getGlobalPiecesDir,
|
||||
getGlobalPersonasDir,
|
||||
getProjectPiecesDir,
|
||||
getProjectPersonasDir,
|
||||
getBuiltinPiecesDir,
|
||||
getProjectFacetDir,
|
||||
getGlobalFacetDir,
|
||||
getBuiltinFacetDir,
|
||||
getLanguage,
|
||||
} from '../../infra/config/index.js';
|
||||
import { getLanguageResourcesDir } from '../../infra/resources/index.js';
|
||||
import { header, success, info, warn, error, blankLine } from '../../shared/ui/index.js';
|
||||
|
||||
export interface EjectOptions {
|
||||
global?: boolean;
|
||||
projectDir?: string;
|
||||
projectDir: string;
|
||||
}
|
||||
|
||||
/** Singular CLI facet type names mapped to directory (plural) FacetType */
|
||||
const FACET_TYPE_MAP: Record<string, FacetType> = {
|
||||
persona: 'personas',
|
||||
policy: 'policies',
|
||||
knowledge: 'knowledge',
|
||||
instruction: 'instructions',
|
||||
'output-contract': 'output-contracts',
|
||||
};
|
||||
|
||||
/** Valid singular facet type names for CLI */
|
||||
export const VALID_FACET_TYPES = Object.keys(FACET_TYPE_MAP);
|
||||
|
||||
/**
|
||||
* Parse singular CLI facet type to plural directory FacetType.
|
||||
* Returns undefined if the input is not a valid facet type.
|
||||
*/
|
||||
export function parseFacetType(singular: string): FacetType | undefined {
|
||||
return FACET_TYPE_MAP[singular];
|
||||
}
|
||||
|
||||
/**
|
||||
* Eject a builtin piece to project or global space for customization.
|
||||
* Copies the piece YAML and related agent .md files, preserving
|
||||
* the directory structure so relative paths continue to work.
|
||||
* Eject a builtin piece YAML to project or global space for customization.
|
||||
* Only copies the piece YAML — facets are resolved via layer system.
|
||||
*/
|
||||
export async function ejectBuiltin(name?: string, options: EjectOptions = {}): Promise<void> {
|
||||
export async function ejectBuiltin(name: string | undefined, options: EjectOptions): Promise<void> {
|
||||
header('Eject Builtin');
|
||||
|
||||
const lang = getLanguage();
|
||||
const builtinPiecesDir = getBuiltinPiecesDir(lang);
|
||||
|
||||
if (!name) {
|
||||
// List available builtins
|
||||
listAvailableBuiltins(builtinPiecesDir, options.global);
|
||||
return;
|
||||
}
|
||||
@ -50,16 +70,12 @@ export async function ejectBuiltin(name?: string, options: EjectOptions = {}): P
|
||||
return;
|
||||
}
|
||||
|
||||
const projectDir = options.projectDir || process.cwd();
|
||||
const targetPiecesDir = options.global ? getGlobalPiecesDir() : getProjectPiecesDir(projectDir);
|
||||
const targetBaseDir = options.global ? dirname(getGlobalPersonasDir()) : dirname(getProjectPersonasDir(projectDir));
|
||||
const builtinBaseDir = getLanguageResourcesDir(lang);
|
||||
const targetPiecesDir = options.global ? getGlobalPiecesDir() : getProjectPiecesDir(options.projectDir);
|
||||
const targetLabel = options.global ? 'global (~/.takt/)' : 'project (.takt/)';
|
||||
|
||||
info(`Ejecting to ${targetLabel}`);
|
||||
info(`Ejecting piece YAML to ${targetLabel}`);
|
||||
blankLine();
|
||||
|
||||
// Copy piece YAML as-is (no path rewriting — directory structure mirrors builtin)
|
||||
const pieceDest = join(targetPiecesDir, `${name}.yaml`);
|
||||
if (existsSync(pieceDest)) {
|
||||
warn(`User piece already exists: ${pieceDest}`);
|
||||
@ -70,31 +86,49 @@ export async function ejectBuiltin(name?: string, options: EjectOptions = {}): P
|
||||
writeFileSync(pieceDest, content, 'utf-8');
|
||||
success(`Ejected piece: ${pieceDest}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Copy related resource files (personas, policies, instructions, output-contracts)
|
||||
const resourceRefs = extractResourceRelativePaths(builtinPath);
|
||||
let copiedCount = 0;
|
||||
/**
|
||||
* Eject an individual facet from builtin to upper layer for customization.
|
||||
* Copies the builtin facet .md file to project (.takt/{type}/) or global (~/.takt/{type}/).
|
||||
*/
|
||||
export async function ejectFacet(
|
||||
facetType: FacetType,
|
||||
name: string,
|
||||
options: EjectOptions,
|
||||
): Promise<void> {
|
||||
header('Eject Facet');
|
||||
|
||||
for (const ref of resourceRefs) {
|
||||
const srcPath = join(builtinBaseDir, ref.type, ref.path);
|
||||
const destPath = join(targetBaseDir, ref.type, ref.path);
|
||||
const lang = getLanguage();
|
||||
const builtinDir = getBuiltinFacetDir(lang, facetType);
|
||||
const srcPath = join(builtinDir, `${name}.md`);
|
||||
|
||||
if (!existsSync(srcPath)) continue;
|
||||
if (!existsSync(srcPath)) {
|
||||
error(`Builtin ${facetType}/${name}.md not found`);
|
||||
info(`Available ${facetType}:`);
|
||||
listAvailableFacets(builtinDir);
|
||||
return;
|
||||
}
|
||||
|
||||
const targetDir = options.global
|
||||
? getGlobalFacetDir(facetType)
|
||||
: getProjectFacetDir(options.projectDir, facetType);
|
||||
const targetLabel = options.global ? 'global (~/.takt/)' : 'project (.takt/)';
|
||||
const destPath = join(targetDir, `${name}.md`);
|
||||
|
||||
info(`Ejecting ${facetType}/${name} to ${targetLabel}`);
|
||||
blankLine();
|
||||
|
||||
if (existsSync(destPath)) {
|
||||
info(` Already exists: ${destPath}`);
|
||||
continue;
|
||||
warn(`Already exists: ${destPath}`);
|
||||
warn('Skipping copy (existing file takes priority).');
|
||||
return;
|
||||
}
|
||||
|
||||
mkdirSync(dirname(destPath), { recursive: true });
|
||||
writeFileSync(destPath, readFileSync(srcPath));
|
||||
info(` ${destPath}`);
|
||||
copiedCount++;
|
||||
}
|
||||
|
||||
if (copiedCount > 0) {
|
||||
success(`${copiedCount} resource file(s) ejected.`);
|
||||
}
|
||||
const content = readFileSync(srcPath, 'utf-8');
|
||||
writeFileSync(destPath, content, 'utf-8');
|
||||
success(`Ejected: ${destPath}`);
|
||||
}
|
||||
|
||||
/** List available builtin pieces for ejection */
|
||||
@ -118,48 +152,23 @@ function listAvailableBuiltins(builtinPiecesDir: string, isGlobal?: boolean): vo
  blankLine();
  const globalFlag = isGlobal ? ' --global' : '';
  info(`Usage: takt eject {name}${globalFlag}`);
  info(`  Eject individual facet: takt eject {type} {name}${globalFlag}`);
  info(`  Types: ${VALID_FACET_TYPES.join(', ')}`);
  if (!isGlobal) {
    info('  Add --global to eject to ~/.takt/ instead of .takt/');
  }
}

/** Resource reference extracted from piece YAML */
interface ResourceRef {
  /** Resource type directory (personas, policies, instructions, output-contracts) */
  type: string;
  /** Relative path within the resource type directory */
  path: string;
/** List available facet files in a builtin directory */
function listAvailableFacets(builtinDir: string): void {
  if (!existsSync(builtinDir)) {
    info('  (none)');
    return;
  }

/** Known resource type directories that can be referenced from piece YAML */
const RESOURCE_TYPES = ['personas', 'policies', 'knowledge', 'instructions', 'output-contracts'];

/**
 * Extract resource relative paths from a builtin piece YAML.
 * Matches `../{type}/{path}` patterns for all known resource types.
 */
function extractResourceRelativePaths(piecePath: string): ResourceRef[] {
  const content = readFileSync(piecePath, 'utf-8');
  const seen = new Set<string>();
  const refs: ResourceRef[] = [];
  const typePattern = RESOURCE_TYPES.join('|');
  const regex = new RegExp(`\\.\\.\\/(?:${typePattern})\\/(.+)`, 'g');

  let match: RegExpExecArray | null;
  while ((match = regex.exec(content)) !== null) {
    // Re-parse to extract type and path separately
    const fullMatch = match[0];
    const typeMatch = fullMatch.match(/\.\.\/([^/]+)\/(.+)/);
    if (typeMatch?.[1] && typeMatch[2]) {
      const type = typeMatch[1];
      const path = typeMatch[2].trim();
      const key = `${type}/${path}`;
      if (!seen.has(key)) {
        seen.add(key);
        refs.push({ type, path });
  for (const entry of readdirSync(builtinDir).sort()) {
    if (!entry.endsWith('.md')) continue;
    if (!statSync(join(builtinDir, entry)).isFile()) continue;
    info(`  ${entry.replace(/\.md$/, '')}`);
  }
}
      }
    }
  }

  return refs;
}

@@ -4,6 +4,6 @@

export { switchPiece } from './switchPiece.js';
export { switchConfig, getCurrentPermissionMode, setPermissionMode, type PermissionMode } from './switchConfig.js';
export { ejectBuiltin } from './ejectBuiltin.js';
export { ejectBuiltin, ejectFacet, parseFacetType, VALID_FACET_TYPES } from './ejectBuiltin.js';
export { resetCategoriesToDefault } from './resetCategories.js';
export { deploySkill } from './deploySkill.js';
@@ -10,7 +10,6 @@
 * /cancel - Cancel and exit
 */

import * as readline from 'node:readline';
import chalk from 'chalk';
import type { Language } from '../../core/models/index.js';
import {
@@ -20,6 +19,7 @@ import {
  loadSessionState,
  clearSessionState,
  type SessionState,
  type MovementPreview,
} from '../../infra/config/index.js';
import { isQuietMode } from '../../shared/context.js';
import { getProvider, type ProviderType } from '../../infra/providers/index.js';
@@ -28,6 +28,7 @@ import { createLogger, getErrorMessage } from '../../shared/utils/index.js';
import { info, error, blankLine, StreamDisplay } from '../../shared/ui/index.js';
import { loadTemplate } from '../../shared/prompts/index.js';
import { getLabel, getLabelObject } from '../../shared/i18n/index.js';
import { readMultilineInput } from './lineEditor.js';
const log = createLogger('interactive');

/** Shape of interactive UI text */
@@ -90,8 +91,44 @@ function resolveLanguage(lang?: Language): 'en' | 'ja' {
  return lang === 'ja' ? 'ja' : 'en';
}

/**
 * Format MovementPreview[] into a Markdown string for template injection.
 * Each movement is rendered with its persona and instruction content.
 */
export function formatMovementPreviews(previews: MovementPreview[], lang: 'en' | 'ja'): string {
  return previews.map((p, i) => {
    const toolsStr = p.allowedTools.length > 0
      ? p.allowedTools.join(', ')
      : (lang === 'ja' ? 'なし' : 'None');
    const editStr = p.canEdit
      ? (lang === 'ja' ? '可' : 'Yes')
      : (lang === 'ja' ? '不可' : 'No');
    const personaLabel = lang === 'ja' ? 'ペルソナ' : 'Persona';
    const instructionLabel = lang === 'ja' ? 'インストラクション' : 'Instruction';
    const toolsLabel = lang === 'ja' ? 'ツール' : 'Tools';
    const editLabel = lang === 'ja' ? '編集' : 'Edit';

    const lines = [
      `### ${i + 1}. ${p.name} (${p.personaDisplayName})`,
    ];
    if (p.personaContent) {
      lines.push(`**${personaLabel}:**`, p.personaContent);
    }
    if (p.instructionContent) {
      lines.push(`**${instructionLabel}:**`, p.instructionContent);
    }
    lines.push(`**${toolsLabel}:** ${toolsStr}`, `**${editLabel}:** ${editStr}`);
    return lines.join('\n');
  }).join('\n\n');
}
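For reference, a self-contained usage sketch of the formatter above. The `MovementPreview` shape used here (name, personaDisplayName, optional persona/instruction content, allowedTools, canEdit) is inferred from this diff, not confirmed against the upstream type definition:

```typescript
// Hypothetical MovementPreview shape, inferred from the fields used in this diff.
interface MovementPreview {
  name: string;
  personaDisplayName: string;
  personaContent?: string;
  instructionContent?: string;
  allowedTools: string[];
  canEdit: boolean;
}

// Trimmed (English-only) copy of the formatter, runnable in isolation.
function formatMovementPreviews(previews: MovementPreview[]): string {
  return previews.map((p, i) => {
    const toolsStr = p.allowedTools.length > 0 ? p.allowedTools.join(', ') : 'None';
    const editStr = p.canEdit ? 'Yes' : 'No';
    const lines = [`### ${i + 1}. ${p.name} (${p.personaDisplayName})`];
    if (p.personaContent) lines.push('**Persona:**', p.personaContent);
    if (p.instructionContent) lines.push('**Instruction:**', p.instructionContent);
    lines.push(`**Tools:** ${toolsStr}`, `**Edit:** ${editStr}`);
    return lines.join('\n');
  }).join('\n\n');
}

const md = formatMovementPreviews([
  { name: 'plan', personaDisplayName: 'Planner', allowedTools: ['Read'], canEdit: false },
]);
console.log(md);
```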

function getInteractivePrompts(lang: 'en' | 'ja', pieceContext?: PieceContext) {
  const systemPrompt = loadTemplate('score_interactive_system_prompt', lang, {});
  const hasPreview = !!pieceContext?.movementPreviews?.length;
  const systemPrompt = loadTemplate('score_interactive_system_prompt', lang, {
    hasPiecePreview: hasPreview,
    pieceStructure: pieceContext?.pieceStructure ?? '',
    movementDetails: hasPreview ? formatMovementPreviews(pieceContext!.movementPreviews!, lang) : '',
  });
  const policyContent = loadTemplate('score_interactive_policy', lang, {});

  return {
@@ -149,10 +186,15 @@ function buildSummaryPrompt(
  }

  const hasPiece = !!pieceContext;
  const hasPreview = !!pieceContext?.movementPreviews?.length;
  const summaryMovementDetails = hasPreview
    ? `\n### ${lang === 'ja' ? '処理するエージェント' : 'Processing Agents'}\n${formatMovementPreviews(pieceContext!.movementPreviews!, lang)}`
    : '';
  return loadTemplate('score_summary_system_prompt', lang, {
    pieceInfo: hasPiece,
    pieceName: pieceContext?.name ?? '',
    pieceDescription: pieceContext?.description ?? '',
    movementDetails: summaryMovementDetails,
    conversation,
  });
}
@@ -176,251 +218,6 @@ async function selectPostSummaryAction(
  ]);
}

/** Escape sequences for terminal protocol control */
const PASTE_BRACKET_ENABLE = '\x1B[?2004h';
const PASTE_BRACKET_DISABLE = '\x1B[?2004l';
// flag 1: Disambiguate escape codes — modified keys (e.g. Shift+Enter) are reported as CSI sequences while unmodified keys (e.g. Enter) remain as legacy codes (\r)
const KITTY_KB_ENABLE = '\x1B[>1u';
const KITTY_KB_DISABLE = '\x1B[<u';

/** Known escape sequence prefixes for matching */
const ESC_PASTE_START = '[200~';
const ESC_PASTE_END = '[201~';
const ESC_SHIFT_ENTER = '[13;2u';

type InputState = 'normal' | 'paste';

/**
 * Decode Kitty CSI-u key sequence into a control character.
 * Example: "[99;5u" (Ctrl+C) -> "\x03"
 */
function decodeCtrlKey(rest: string): { ch: string; consumed: number } | null {
  // Kitty CSI-u: [codepoint;modifiersu
  const kittyMatch = rest.match(/^\[(\d+);(\d+)u/);
  if (kittyMatch) {
    const codepoint = Number.parseInt(kittyMatch[1]!, 10);
    const modifiers = Number.parseInt(kittyMatch[2]!, 10);
    // Kitty modifiers are 1-based; Ctrl bit is 4 in 0-based flags.
    const ctrlPressed = (((modifiers - 1) & 4) !== 0);
    if (!ctrlPressed) return null;

    const key = String.fromCodePoint(codepoint);
    if (!/^[A-Za-z]$/.test(key)) return null;

    const upper = key.toUpperCase();
    const controlCode = upper.charCodeAt(0) & 0x1f;
    return { ch: String.fromCharCode(controlCode), consumed: kittyMatch[0].length };
  }

  // xterm modifyOtherKeys: [27;modifiers;codepoint~
  const xtermMatch = rest.match(/^\[27;(\d+);(\d+)~/);
  if (!xtermMatch) return null;

  const modifiers = Number.parseInt(xtermMatch[1]!, 10);
  const codepoint = Number.parseInt(xtermMatch[2]!, 10);
  const ctrlPressed = (((modifiers - 1) & 4) !== 0);
  if (!ctrlPressed) return null;

  const key = String.fromCodePoint(codepoint);
  if (!/^[A-Za-z]$/.test(key)) return null;

  const upper = key.toUpperCase();
  const controlCode = upper.charCodeAt(0) & 0x1f;
  return { ch: String.fromCharCode(controlCode), consumed: xtermMatch[0].length };
}

/**
 * Parse raw stdin data and process each character/sequence.
 *
 * Handles escape sequences for paste bracket mode (start/end),
 * Kitty keyboard protocol (Shift+Enter), and arrow keys (ignored).
 * Regular characters are passed to the onChar callback.
 */
function parseInputData(
  data: string,
  callbacks: {
    onPasteStart: () => void;
    onPasteEnd: () => void;
    onShiftEnter: () => void;
    onChar: (ch: string) => void;
  },
): void {
  let i = 0;
  while (i < data.length) {
    const ch = data[i]!;

    if (ch === '\x1B') {
      // Try to match known escape sequences
      const rest = data.slice(i + 1);

      if (rest.startsWith(ESC_PASTE_START)) {
        callbacks.onPasteStart();
        i += 1 + ESC_PASTE_START.length;
        continue;
      }
      if (rest.startsWith(ESC_PASTE_END)) {
        callbacks.onPasteEnd();
        i += 1 + ESC_PASTE_END.length;
        continue;
      }
      if (rest.startsWith(ESC_SHIFT_ENTER)) {
        callbacks.onShiftEnter();
        i += 1 + ESC_SHIFT_ENTER.length;
        continue;
      }
      const ctrlKey = decodeCtrlKey(rest);
      if (ctrlKey) {
        callbacks.onChar(ctrlKey.ch);
        i += 1 + ctrlKey.consumed;
        continue;
      }
      // Arrow keys and other CSI sequences: skip \x1B[ + letter/params
      if (rest.startsWith('[')) {
        const csiMatch = rest.match(/^\[[0-9;]*[A-Za-z~]/);
        if (csiMatch) {
          i += 1 + csiMatch[0].length;
          continue;
        }
      }
      // Unrecognized escape: skip the \x1B
      i++;
      continue;
    }

    callbacks.onChar(ch);
    i++;
  }
}

/**
 * Read multiline input from the user using raw mode.
 *
 * Supports:
 * - Enter (\r) to confirm and submit input
 * - Shift+Enter (Kitty keyboard protocol) to insert a newline
 * - Paste bracket mode for correctly handling pasted text with newlines
 * - Backspace (\x7F) to delete the last character
 * - Ctrl+C (\x03) and Ctrl+D (\x04) to cancel (returns null)
 *
 * Falls back to readline.question() in non-TTY environments.
 */
function readMultilineInput(prompt: string): Promise<string | null> {
  // Non-TTY fallback: use readline for pipe/CI environments
  if (!process.stdin.isTTY) {
    return new Promise((resolve) => {
      if (process.stdin.readable && !process.stdin.destroyed) {
        process.stdin.resume();
      }

      const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
      });

      let answered = false;

      rl.question(prompt, (answer) => {
        answered = true;
        rl.close();
        resolve(answer);
      });

      rl.on('close', () => {
        if (!answered) {
          resolve(null);
        }
      });
    });
  }

  return new Promise((resolve) => {
    let buffer = '';
    let state: InputState = 'normal';

    const wasRaw = process.stdin.isRaw;
    process.stdin.setRawMode(true);
    process.stdin.resume();

    // Enable paste bracket mode and Kitty keyboard protocol
    process.stdout.write(PASTE_BRACKET_ENABLE);
    process.stdout.write(KITTY_KB_ENABLE);

    // Display the prompt
    process.stdout.write(prompt);

    function cleanup(): void {
      process.stdin.removeListener('data', onData);
      process.stdout.write(PASTE_BRACKET_DISABLE);
      process.stdout.write(KITTY_KB_DISABLE);
      process.stdin.setRawMode(wasRaw ?? false);
      process.stdin.pause();
    }

    function onData(data: Buffer): void {
      try {
        const str = data.toString('utf-8');

        parseInputData(str, {
          onPasteStart() {
            state = 'paste';
          },
          onPasteEnd() {
            state = 'normal';
          },
          onShiftEnter() {
            buffer += '\n';
            process.stdout.write('\n');
          },
          onChar(ch: string) {
            if (state === 'paste') {
              if (ch === '\r' || ch === '\n') {
                buffer += '\n';
                process.stdout.write('\n');
              } else {
                buffer += ch;
                process.stdout.write(ch);
              }
              return;
            }

            // NORMAL state
            if (ch === '\r') {
              // Enter: confirm input
              process.stdout.write('\n');
              cleanup();
              resolve(buffer);
              return;
            }
            if (ch === '\x03' || ch === '\x04') {
              // Ctrl+C or Ctrl+D: cancel
              process.stdout.write('\n');
              cleanup();
              resolve(null);
              return;
            }
            if (ch === '\x7F') {
              // Backspace: delete last character
              if (buffer.length > 0) {
                buffer = buffer.slice(0, -1);
                process.stdout.write('\b \b');
              }
              return;
            }
            // Regular character
            buffer += ch;
            process.stdout.write(ch);
          },
        });
      } catch {
        cleanup();
        resolve(null);
      }
    }

    process.stdin.on('data', onData);
  });
}

/**
 * Call AI with the same pattern as piece execution.
 * The key requirement is passing onStream — the Agent SDK requires
@@ -465,6 +262,8 @@ export interface PieceContext {
  description: string;
  /** Piece structure (numbered list of movements) */
  pieceStructure: string;
  /** Movement previews (persona + instruction content for first N movements) */
  movementPreviews?: MovementPreview[];
}

/**

565  src/features/interactive/lineEditor.ts  Normal file
@@ -0,0 +1,565 @@
/**
 * Line editor with cursor management for raw-mode terminal input.
 *
 * Handles:
 * - Escape sequence parsing (Kitty keyboard protocol, paste bracket mode)
 * - Cursor-aware buffer editing (insert, delete, move)
 * - Terminal rendering via ANSI escape sequences
 */

import * as readline from 'node:readline';
import { StringDecoder } from 'node:string_decoder';
import { stripAnsi, getDisplayWidth } from '../../shared/utils/text.js';

/** Escape sequences for terminal protocol control */
const PASTE_BRACKET_ENABLE = '\x1B[?2004h';
const PASTE_BRACKET_DISABLE = '\x1B[?2004l';
// flag 1: Disambiguate escape codes — modified keys (e.g. Shift+Enter) are reported
// as CSI sequences while unmodified keys (e.g. Enter) remain as legacy codes (\r)
const KITTY_KB_ENABLE = '\x1B[>1u';
const KITTY_KB_DISABLE = '\x1B[<u';

/** Known escape sequence prefixes for matching */
const ESC_PASTE_START = '[200~';
const ESC_PASTE_END = '[201~';
const ESC_SHIFT_ENTER = '[13;2u';

type InputState = 'normal' | 'paste';

/**
 * Decode Kitty CSI-u key sequence into a control character.
 * Example: "[99;5u" (Ctrl+C) -> "\x03"
 */
function decodeCtrlKey(rest: string): { ch: string; consumed: number } | null {
  // Kitty CSI-u: [codepoint;modifiersu
  const kittyMatch = rest.match(/^\[(\d+);(\d+)u/);
  if (kittyMatch) {
    const codepoint = Number.parseInt(kittyMatch[1]!, 10);
    const modifiers = Number.parseInt(kittyMatch[2]!, 10);
    // Kitty modifiers are 1-based; Ctrl bit is 4 in 0-based flags.
    const ctrlPressed = ((modifiers - 1) & 4) !== 0;
    if (!ctrlPressed) return null;

    const key = String.fromCodePoint(codepoint);
    if (!/^[A-Za-z]$/.test(key)) return null;

    const upper = key.toUpperCase();
    const controlCode = upper.charCodeAt(0) & 0x1f;
    return { ch: String.fromCharCode(controlCode), consumed: kittyMatch[0].length };
  }

  // xterm modifyOtherKeys: [27;modifiers;codepoint~
  const xtermMatch = rest.match(/^\[27;(\d+);(\d+)~/);
  if (!xtermMatch) return null;

  const modifiers = Number.parseInt(xtermMatch[1]!, 10);
  const codepoint = Number.parseInt(xtermMatch[2]!, 10);
  const ctrlPressed = ((modifiers - 1) & 4) !== 0;
  if (!ctrlPressed) return null;

  const key = String.fromCodePoint(codepoint);
  if (!/^[A-Za-z]$/.test(key)) return null;

  const upper = key.toUpperCase();
  const controlCode = upper.charCodeAt(0) & 0x1f;
  return { ch: String.fromCharCode(controlCode), consumed: xtermMatch[0].length };
}
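As a standalone illustration of the Kitty CSI-u branch above (the xterm modifyOtherKeys branch follows the same modifier arithmetic), here is a trimmed copy that can be run in isolation:

```typescript
// Trimmed, self-contained copy of the Kitty CSI-u decoding branch shown above.
// "[99;5u" means codepoint 99 ('c') with modifier field 5; modifiers are
// 1-based, so (5 - 1) & 4 tests the Ctrl bit. 'C' & 0x1f yields \x03.
function decodeCtrlKey(rest: string): { ch: string; consumed: number } | null {
  const m = rest.match(/^\[(\d+);(\d+)u/);
  if (!m) return null;
  const codepoint = Number.parseInt(m[1], 10);
  const modifiers = Number.parseInt(m[2], 10);
  if (((modifiers - 1) & 4) === 0) return null; // Ctrl not pressed
  const key = String.fromCodePoint(codepoint);
  if (!/^[A-Za-z]$/.test(key)) return null;
  const controlCode = key.toUpperCase().charCodeAt(0) & 0x1f;
  return { ch: String.fromCharCode(controlCode), consumed: m[0].length };
}

console.log(JSON.stringify(decodeCtrlKey('[99;5u'))); // Ctrl+C -> {"ch":"\u0003","consumed":6}
```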

/** Callbacks for parsed input events */
export interface InputCallbacks {
  onPasteStart: () => void;
  onPasteEnd: () => void;
  onShiftEnter: () => void;
  onArrowLeft: () => void;
  onArrowRight: () => void;
  onArrowUp: () => void;
  onArrowDown: () => void;
  onWordLeft: () => void;
  onWordRight: () => void;
  onHome: () => void;
  onEnd: () => void;
  onChar: (ch: string) => void;
}

/**
 * Parse raw stdin data into semantic input events.
 *
 * Handles paste bracket mode, Kitty keyboard protocol, arrow keys,
 * Home/End, and Ctrl key combinations. Unknown CSI sequences are skipped.
 */
export function parseInputData(data: string, callbacks: InputCallbacks): void {
  let i = 0;
  while (i < data.length) {
    const ch = data[i]!;

    if (ch === '\x1B') {
      const rest = data.slice(i + 1);

      if (rest.startsWith(ESC_PASTE_START)) {
        callbacks.onPasteStart();
        i += 1 + ESC_PASTE_START.length;
        continue;
      }
      if (rest.startsWith(ESC_PASTE_END)) {
        callbacks.onPasteEnd();
        i += 1 + ESC_PASTE_END.length;
        continue;
      }
      if (rest.startsWith(ESC_SHIFT_ENTER)) {
        callbacks.onShiftEnter();
        i += 1 + ESC_SHIFT_ENTER.length;
        continue;
      }
      const ctrlKey = decodeCtrlKey(rest);
      if (ctrlKey) {
        callbacks.onChar(ctrlKey.ch);
        i += 1 + ctrlKey.consumed;
        continue;
      }

      // Arrow keys
      if (rest.startsWith('[D')) {
        callbacks.onArrowLeft();
        i += 3;
        continue;
      }
      if (rest.startsWith('[C')) {
        callbacks.onArrowRight();
        i += 3;
        continue;
      }
      if (rest.startsWith('[A')) {
        callbacks.onArrowUp();
        i += 3;
        continue;
      }
      if (rest.startsWith('[B')) {
        callbacks.onArrowDown();
        i += 3;
        continue;
      }

      // Option+Arrow (CSI modified): \x1B[1;3D (left), \x1B[1;3C (right)
      if (rest.startsWith('[1;3D')) {
        callbacks.onWordLeft();
        i += 6;
        continue;
      }
      if (rest.startsWith('[1;3C')) {
        callbacks.onWordRight();
        i += 6;
        continue;
      }

      // Option+Arrow (SS3/alt): \x1Bb (left), \x1Bf (right)
      if (rest.startsWith('b')) {
        callbacks.onWordLeft();
        i += 2;
        continue;
      }
      if (rest.startsWith('f')) {
        callbacks.onWordRight();
        i += 2;
        continue;
      }

      // Home: \x1B[H (CSI) or \x1BOH (SS3/application mode)
      if (rest.startsWith('[H') || rest.startsWith('OH')) {
        callbacks.onHome();
        i += 3;
        continue;
      }

      // End: \x1B[F (CSI) or \x1BOF (SS3/application mode)
      if (rest.startsWith('[F') || rest.startsWith('OF')) {
        callbacks.onEnd();
        i += 3;
        continue;
      }

      // Unknown CSI sequences: skip
      if (rest.startsWith('[')) {
        const csiMatch = rest.match(/^\[[0-9;]*[A-Za-z~]/);
        if (csiMatch) {
          i += 1 + csiMatch[0].length;
          continue;
        }
      }
      // Unrecognized escape: skip the \x1B
      i++;
      continue;
    }

    callbacks.onChar(ch);
    i++;
  }
}

/**
 * Read multiline input from the user using raw mode with cursor management.
 *
 * Supports:
 * - Enter to submit, Shift+Enter to insert newline
 * - Paste bracket mode for pasted text with newlines
 * - Left/Right arrows, Home/End for cursor movement
 * - Ctrl+A/E (line start/end), Ctrl+K/U (kill line), Ctrl+W (delete word)
 * - Backspace / Ctrl+H, Ctrl+C / Ctrl+D (cancel)
 *
 * Falls back to readline.question() in non-TTY environments.
 */
export function readMultilineInput(prompt: string): Promise<string | null> {
  if (!process.stdin.isTTY) {
    return new Promise((resolve) => {
      if (process.stdin.readable && !process.stdin.destroyed) {
        process.stdin.resume();
      }

      const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
      });

      let answered = false;

      rl.question(prompt, (answer) => {
        answered = true;
        rl.close();
        resolve(answer);
      });

      rl.on('close', () => {
        if (!answered) {
          resolve(null);
        }
      });
    });
  }

  return new Promise((resolve) => {
    let buffer = '';
    let cursorPos = 0;
    let state: InputState = 'normal';

    const wasRaw = process.stdin.isRaw;
    process.stdin.setRawMode(true);
    process.stdin.resume();

    process.stdout.write(PASTE_BRACKET_ENABLE);
    process.stdout.write(KITTY_KB_ENABLE);
    process.stdout.write(prompt);

    // --- Buffer position helpers ---

    function getLineStart(): number {
      const lastNl = buffer.lastIndexOf('\n', cursorPos - 1);
      return lastNl + 1;
    }

    function getLineEnd(): number {
      const nextNl = buffer.indexOf('\n', cursorPos);
      return nextNl >= 0 ? nextNl : buffer.length;
    }

    function getLineStartAt(pos: number): number {
      const lastNl = buffer.lastIndexOf('\n', pos - 1);
      return lastNl + 1;
    }

    function getLineEndAt(pos: number): number {
      const nextNl = buffer.indexOf('\n', pos);
      return nextNl >= 0 ? nextNl : buffer.length;
    }

    /** Display width from line start to cursor */
    function getDisplayColumn(): number {
      return getDisplayWidth(buffer.slice(getLineStart(), cursorPos));
    }

    const promptWidth = getDisplayWidth(stripAnsi(prompt));

    /** Terminal column (1-based) for a given buffer position */
    function getTerminalColumn(pos: number): number {
      const lineStart = getLineStartAt(pos);
      const col = getDisplayWidth(buffer.slice(lineStart, pos));
      const isFirstLine = lineStart === 0;
      return isFirstLine ? promptWidth + col + 1 : col + 1;
    }

    /** Find the buffer position in a line that matches a target display column */
    function findPositionByDisplayColumn(lineStart: number, lineEnd: number, targetDisplayCol: number): number {
      let displayCol = 0;
      let pos = lineStart;
      for (const ch of buffer.slice(lineStart, lineEnd)) {
        const w = getDisplayWidth(ch);
        if (displayCol + w > targetDisplayCol) break;
        displayCol += w;
        pos += ch.length;
      }
      return pos;
    }
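A minimal runnable sketch of the display-column lookup above, with a stubbed `getDisplayWidth` that counts common CJK ranges as two columns (an assumption standing in for the real helper in `shared/utils/text`):

```typescript
// Stub: assumes characters in a few common CJK ranges occupy two terminal columns.
function getDisplayWidth(s: string): number {
  let w = 0;
  for (const ch of s) {
    w += /[\u1100-\u115F\u2E80-\uA4CF\uAC00-\uD7A3\uF900-\uFAFF\uFF00-\uFF60]/.test(ch) ? 2 : 1;
  }
  return w;
}

// Same scan as the editor helper, operating on a single line string:
// walk characters until adding the next one would pass the target column.
function findPositionByDisplayColumn(line: string, targetDisplayCol: number): number {
  let displayCol = 0;
  let pos = 0;
  for (const ch of line) {
    const w = getDisplayWidth(ch);
    if (displayCol + w > targetDisplayCol) break;
    displayCol += w;
    pos += ch.length;
  }
  return pos;
}

// 'あ' is two columns wide, so display column 3 lands after 'あa' (buffer position 2).
console.log(findPositionByDisplayColumn('あab', 3));
```

This is what keeps vertical cursor movement aligned when lines mix single-width and double-width characters.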

    // --- Terminal output helpers ---

    function rerenderFromCursor(): void {
      const afterCursor = buffer.slice(cursorPos, getLineEnd());
      if (afterCursor.length > 0) {
        process.stdout.write(afterCursor);
      }
      process.stdout.write('\x1B[K');
      const afterWidth = getDisplayWidth(afterCursor);
      if (afterWidth > 0) {
        process.stdout.write(`\x1B[${afterWidth}D`);
      }
    }

    function cleanup(): void {
      process.stdin.removeListener('data', onData);
      process.stdout.write(PASTE_BRACKET_DISABLE);
      process.stdout.write(KITTY_KB_DISABLE);
      process.stdin.setRawMode(wasRaw ?? false);
      process.stdin.pause();
    }

    // --- Cursor movement ---

    function moveCursorToLineStart(): void {
      const displayOffset = getDisplayColumn();
      if (displayOffset > 0) {
        cursorPos = getLineStart();
        process.stdout.write(`\x1B[${displayOffset}D`);
      }
    }

    function moveCursorToLineEnd(): void {
      const lineEnd = getLineEnd();
      const displayOffset = getDisplayWidth(buffer.slice(cursorPos, lineEnd));
      if (displayOffset > 0) {
        cursorPos = lineEnd;
        process.stdout.write(`\x1B[${displayOffset}C`);
      }
    }

    // --- Buffer editing ---

    function insertAt(pos: number, text: string): void {
      buffer = buffer.slice(0, pos) + text + buffer.slice(pos);
    }

    function deleteRange(start: number, end: number): void {
      buffer = buffer.slice(0, start) + buffer.slice(end);
    }

    function insertChar(ch: string): void {
      insertAt(cursorPos, ch);
      cursorPos += ch.length;
      process.stdout.write(ch);
      if (cursorPos < getLineEnd()) {
        const afterCursor = buffer.slice(cursorPos, getLineEnd());
        process.stdout.write(afterCursor);
        process.stdout.write('\x1B[K');
        const afterWidth = getDisplayWidth(afterCursor);
        process.stdout.write(`\x1B[${afterWidth}D`);
      }
    }

    function deleteCharBefore(): void {
      if (cursorPos <= getLineStart()) return;
      const charWidth = getDisplayWidth(buffer[cursorPos - 1]!);
      deleteRange(cursorPos - 1, cursorPos);
      cursorPos--;
      process.stdout.write(`\x1B[${charWidth}D`);
      rerenderFromCursor();
    }

    function deleteToLineEnd(): void {
      const lineEnd = getLineEnd();
      if (cursorPos < lineEnd) {
        deleteRange(cursorPos, lineEnd);
        process.stdout.write('\x1B[K');
      }
    }

    function deleteToLineStart(): void {
      const lineStart = getLineStart();
      if (cursorPos > lineStart) {
        const deletedWidth = getDisplayWidth(buffer.slice(lineStart, cursorPos));
        deleteRange(lineStart, cursorPos);
        cursorPos = lineStart;
        process.stdout.write(`\x1B[${deletedWidth}D`);
        rerenderFromCursor();
      }
    }

    function deleteWord(): void {
      const lineStart = getLineStart();
      let end = cursorPos;
      while (end > lineStart && buffer[end - 1] === ' ') end--;
      while (end > lineStart && buffer[end - 1] !== ' ') end--;
      if (end < cursorPos) {
        const deletedWidth = getDisplayWidth(buffer.slice(end, cursorPos));
        deleteRange(end, cursorPos);
        cursorPos = end;
        process.stdout.write(`\x1B[${deletedWidth}D`);
        rerenderFromCursor();
      }
    }
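The Ctrl+W scan above can be exercised in isolation. This sketch extracts just the backward word-boundary computation (skip trailing spaces, then skip the word) as a pure function over the buffer:

```typescript
// Pure version of the Ctrl+W scan in deleteWord above: returns the buffer
// position the deletion should extend back to, staying within the current line.
function wordDeleteEnd(buffer: string, cursorPos: number, lineStart: number): number {
  let end = cursorPos;
  while (end > lineStart && buffer[end - 1] === ' ') end--; // skip trailing spaces
  while (end > lineStart && buffer[end - 1] !== ' ') end--; // skip the word itself
  return end;
}

// Deleting at the end of "git commit " removes "commit " and leaves "git ".
console.log(wordDeleteEnd('git commit ', 11, 0)); // 4
```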
|
||||
|
||||
function insertNewline(): void {
|
||||
const afterCursorOnLine = buffer.slice(cursorPos, getLineEnd());
|
||||
insertAt(cursorPos, '\n');
|
||||
cursorPos++;
|
||||
process.stdout.write('\x1B[K');
|
||||
process.stdout.write('\n');
|
||||
if (afterCursorOnLine.length > 0) {
|
||||
process.stdout.write(afterCursorOnLine);
|
||||
const afterWidth = getDisplayWidth(afterCursorOnLine);
|
||||
process.stdout.write(`\x1B[${afterWidth}D`);
|
||||
}
|
||||
}
|
||||
|
||||
// --- Input dispatch ---
|
||||
|
||||
const utf8Decoder = new StringDecoder('utf8');
|
||||
|
||||
function onData(data: Buffer): void {
|
||||
try {
|
||||
const str = utf8Decoder.write(data);
|
||||
if (!str) return;
|
||||
|
||||
parseInputData(str, {
|
||||
onPasteStart() { state = 'paste'; },
|
||||
onPasteEnd() {
|
||||
state = 'normal';
|
||||
rerenderFromCursor();
|
||||
},
|
||||
onShiftEnter() { insertNewline(); },
|
||||
onArrowLeft() {
|
||||
if (state !== 'normal') return;
|
||||
if (cursorPos > getLineStart()) {
|
||||
const charWidth = getDisplayWidth(buffer[cursorPos - 1]!);
|
||||
cursorPos--;
|
||||
process.stdout.write(`\x1B[${charWidth}D`);
|
||||
} else if (getLineStart() > 0) {
|
||||
cursorPos = getLineStart() - 1;
|
||||
const col = getTerminalColumn(cursorPos);
|
||||
process.stdout.write('\x1B[A');
|
||||
process.stdout.write(`\x1B[${col}G`);
|
||||
}
|
||||
},
|
||||
onArrowRight() {
|
||||
if (state !== 'normal') return;
|
||||
if (cursorPos < getLineEnd()) {
|
||||
const charWidth = getDisplayWidth(buffer[cursorPos]!);
|
||||
cursorPos++;
|
||||
process.stdout.write(`\x1B[${charWidth}C`);
|
||||
} else if (cursorPos < buffer.length && buffer[cursorPos] === '\n') {
|
||||
cursorPos++;
|
||||
const col = getTerminalColumn(cursorPos);
|
||||
process.stdout.write('\x1B[B');
|
||||
process.stdout.write(`\x1B[${col}G`);
|
||||
}
|
||||
},
|
||||
onArrowUp() {
|
||||
if (state !== 'normal') return;
|
||||
const lineStart = getLineStart();
|
||||
if (lineStart === 0) return;
|
||||
const displayCol = getDisplayColumn();
|
||||
const prevLineStart = getLineStartAt(lineStart - 1);
|
||||
const prevLineEnd = lineStart - 1;
|
||||
cursorPos = findPositionByDisplayColumn(prevLineStart, prevLineEnd, displayCol);
|
||||
const termCol = getTerminalColumn(cursorPos);
|
||||
process.stdout.write('\x1B[A');
|
||||
process.stdout.write(`\x1B[${termCol}G`);
|
||||
},
|
||||
onArrowDown() {
|
||||
if (state !== 'normal') return;
|
||||
const lineEnd = getLineEnd();
|
||||
if (lineEnd >= buffer.length) return;
|
||||
const displayCol = getDisplayColumn();
|
||||
const nextLineStart = lineEnd + 1;
|
||||
const nextLineEnd = getLineEndAt(nextLineStart);
|
||||
cursorPos = findPositionByDisplayColumn(nextLineStart, nextLineEnd, displayCol);
|
||||
const termCol = getTerminalColumn(cursorPos);
|
||||
process.stdout.write('\x1B[B');
|
||||
process.stdout.write(`\x1B[${termCol}G`);
|
||||
},
|
||||
onWordLeft() {
|
||||
if (state !== 'normal') return;
|
||||
const lineStart = getLineStart();
|
||||
if (cursorPos <= lineStart) return;
|
||||
let pos = cursorPos;
|
||||
while (pos > lineStart && buffer[pos - 1] === ' ') pos--;
|
||||
while (pos > lineStart && buffer[pos - 1] !== ' ') pos--;
|
||||
const moveWidth = getDisplayWidth(buffer.slice(pos, cursorPos));
|
||||
cursorPos = pos;
|
||||
process.stdout.write(`\x1B[${moveWidth}D`);
|
||||
},
|
||||
onWordRight() {
|
||||
if (state !== 'normal') return;
|
||||
const lineEnd = getLineEnd();
|
||||
if (cursorPos >= lineEnd) return;
|
||||
let pos = cursorPos;
|
||||
while (pos < lineEnd && buffer[pos] !== ' ') pos++;
|
||||
while (pos < lineEnd && buffer[pos] === ' ') pos++;
|
||||
const moveWidth = getDisplayWidth(buffer.slice(cursorPos, pos));
|
||||
cursorPos = pos;
|
||||
process.stdout.write(`\x1B[${moveWidth}C`);
|
||||
},
        onHome() {
          if (state !== 'normal') return;
          moveCursorToLineStart();
        },
        onEnd() {
          if (state !== 'normal') return;
          moveCursorToLineEnd();
        },
        onChar(ch: string) {
          if (state === 'paste') {
            if (ch === '\r' || ch === '\n') {
              insertAt(cursorPos, '\n');
              cursorPos++;
              process.stdout.write('\n');
            } else {
              insertAt(cursorPos, ch);
              cursorPos++;
              process.stdout.write(ch);
            }
            return;
          }

          // Submit
          if (ch === '\r') {
            process.stdout.write('\n');
            cleanup();
            resolve(buffer);
            return;
          }
          // Cancel
          if (ch === '\x03' || ch === '\x04') {
            process.stdout.write('\n');
            cleanup();
            resolve(null);
            return;
          }
          // Editing
          if (ch === '\x7F' || ch === '\x08') { deleteCharBefore(); return; }
          if (ch === '\x01') { moveCursorToLineStart(); return; }
          if (ch === '\x05') { moveCursorToLineEnd(); return; }
          if (ch === '\x0B') { deleteToLineEnd(); return; }
          if (ch === '\x15') { deleteToLineStart(); return; }
          if (ch === '\x17') { deleteWord(); return; }
          // Ignore unknown control characters
          if (ch.charCodeAt(0) < 0x20) return;
          // Regular character
          insertChar(ch);
        },
      });
    } catch {
      cleanup();
      resolve(null);
    }
  }

  process.stdin.on('data', onData);
});
}
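The `onChar` handler above dispatches on raw control bytes read from stdin in raw mode. The mapping can be isolated as a pure classification function, which makes the keybindings easy to see and test (a sketch — action names are hypothetical; the real handler calls editor functions directly):

```typescript
type EditorAction =
  | 'submit' | 'cancel' | 'backspace' | 'line-start' | 'line-end'
  | 'kill-to-end' | 'kill-to-start' | 'delete-word' | 'insert' | 'ignore';

function classifyKey(ch: string): EditorAction {
  switch (ch) {
    case '\r': return 'submit';          // Enter
    case '\x03':                         // Ctrl+C
    case '\x04': return 'cancel';        // Ctrl+D
    case '\x7F':                         // DEL
    case '\x08': return 'backspace';     // Ctrl+H / Backspace
    case '\x01': return 'line-start';    // Ctrl+A
    case '\x05': return 'line-end';      // Ctrl+E
    case '\x0B': return 'kill-to-end';   // Ctrl+K
    case '\x15': return 'kill-to-start'; // Ctrl+U
    case '\x17': return 'delete-word';   // Ctrl+W
  }
  // Any other control byte is ignored; printable input is inserted.
  return ch.charCodeAt(0) < 0x20 ? 'ignore' : 'insert';
}
```

Keeping the byte-to-action table separate from the side effects is a common way to unit-test a raw-mode line editor without a TTY.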
@@ -19,7 +19,7 @@ import {
  buildPrBody,
  type GitHubIssue,
} from '../../infra/github/index.js';
-import { stageAndCommit } from '../../infra/task/index.js';
+import { stageAndCommit, getCurrentBranch } from '../../infra/task/index.js';
import { executeTask, type TaskExecutionOptions, type PipelineExecutionOptions } from '../tasks/index.js';
import { loadGlobalConfig } from '../../infra/config/index.js';
import { info, error, success, status, blankLine } from '../../shared/ui/index.js';
@@ -136,7 +136,9 @@ export async function executePipeline(options: PipelineExecutionOptions): Promis

  // --- Step 2: Create branch (skip if --skip-git) ---
  let branch: string | undefined;
+  let baseBranch: string | undefined;
  if (!skipGit) {
+    baseBranch = getCurrentBranch(cwd);
    branch = options.branch ?? generatePipelineBranchName(pipelineConfig, options.issueNumber);
    info(`Creating branch: ${branch}`);
    try {
@@ -206,6 +208,7 @@ export async function executePipeline(options: PipelineExecutionOptions): Promis
      branch,
      title: prTitle,
      body: prBody,
+      base: baseBranch,
      repo: options.repo,
    });

@@ -11,7 +11,7 @@ import { stringify as stringifyYaml } from 'yaml';
import { promptInput, confirm } from '../../../shared/prompt/index.js';
import { success, info, error } from '../../../shared/ui/index.js';
import { summarizeTaskName, type TaskFileData } from '../../../infra/task/index.js';
-import { getPieceDescription } from '../../../infra/config/index.js';
+import { getPieceDescription, loadGlobalConfig } from '../../../infra/config/index.js';
import { determinePiece } from '../execute/selectAndExecute.js';
import { createLogger, getErrorMessage } from '../../../shared/utils/index.js';
import { isIssueReference, resolveIssueTask, parseIssueNumbers, createIssue } from '../../../infra/github/index.js';
@@ -87,19 +87,52 @@ export function createIssueFromTask(task: string): void {
  }
}

+interface WorktreeSettings {
+  worktree?: boolean | string;
+  branch?: string;
+  autoPr?: boolean;
+}
+
+async function promptWorktreeSettings(): Promise<WorktreeSettings> {
+  const useWorktree = await confirm('Create worktree?', true);
+  if (!useWorktree) {
+    return {};
+  }
+
+  const customPath = await promptInput('Worktree path (Enter for auto)');
+  const worktree: boolean | string = customPath || true;
+
+  const customBranch = await promptInput('Branch name (Enter for auto)');
+  const branch = customBranch || undefined;
+
+  const autoPr = await confirm('Auto-create PR?', true);
+
+  return { worktree, branch, autoPr };
+}
+
/**
 * Save a task from interactive mode result.
- * Does not prompt for worktree/branch settings.
+ * Prompts for worktree/branch/auto_pr settings before saving.
 */
export async function saveTaskFromInteractive(
  cwd: string,
  task: string,
  piece?: string,
): Promise<void> {
-  const filePath = await saveTaskFile(cwd, task, { piece });
+  const settings = await promptWorktreeSettings();
+  const filePath = await saveTaskFile(cwd, task, { piece, ...settings });
  const filename = path.basename(filePath);
  success(`Task created: ${filename}`);
  info(` Path: ${filePath}`);
+  if (settings.worktree) {
+    info(` Worktree: ${typeof settings.worktree === 'string' ? settings.worktree : 'auto'}`);
+  }
+  if (settings.branch) {
+    info(` Branch: ${settings.branch}`);
+  }
+  if (settings.autoPr) {
+    info(` Auto-PR: yes`);
+  }
  if (piece) info(` Piece: ${piece}`);
}
@@ -151,7 +184,9 @@ export async function addTask(cwd: string, task?: string): Promise<void> {
    }
    piece = pieceId;

-    const pieceContext = getPieceDescription(pieceId, cwd);
+    const globalConfig = loadGlobalConfig();
+    const previewCount = globalConfig.interactivePreviewMovements;
+    const pieceContext = getPieceDescription(pieceId, cwd, previewCount);

    // Interactive mode: AI conversation to refine task
    const result = await interactiveMode(cwd, undefined, pieceContext);
@@ -171,43 +206,25 @@ export async function addTask(cwd: string, task?: string): Promise<void> {
  }

-  // Worktree/branch/PR settings
-  let worktree: boolean | string | undefined;
-  let branch: string | undefined;
-  let autoPr: boolean | undefined;
-
-  const useWorktree = await confirm('Create worktree?', true);
-  if (useWorktree) {
-    const customPath = await promptInput('Worktree path (Enter for auto)');
-    worktree = customPath || true;
-
-    const customBranch = await promptInput('Branch name (Enter for auto)');
-    if (customBranch) {
-      branch = customBranch;
-    }
-
-    // Confirm PR (only when worktree is enabled)
-    autoPr = await confirm('Auto-create PR?', true);
-  }
+  const settings = await promptWorktreeSettings();

  // Create the YAML file
  const filePath = await saveTaskFile(cwd, taskContent, {
    piece,
    issue: issueNumber,
-    worktree,
-    branch,
-    autoPr,
+    ...settings,
  });

  const filename = path.basename(filePath);
  success(`Task created: ${filename}`);
  info(` Path: ${filePath}`);
-  if (worktree) {
-    info(` Worktree: ${typeof worktree === 'string' ? worktree : 'auto'}`);
+  if (settings.worktree) {
+    info(` Worktree: ${typeof settings.worktree === 'string' ? settings.worktree : 'auto'}`);
  }
-  if (branch) {
-    info(` Branch: ${branch}`);
+  if (settings.branch) {
+    info(` Branch: ${settings.branch}`);
  }
-  if (autoPr) {
+  if (settings.autoPr) {
    info(` Auto-PR: yes`);
  }
  if (piece) {

114
src/features/tasks/execute/parallelExecution.ts
Normal file
@@ -0,0 +1,114 @@
/**
 * Worker pool task execution strategy.
 *
 * Runs tasks using a fixed-size worker pool. Each worker picks up the next
 * available task as soon as it finishes the current one, maximizing slot
 * utilization. Works for both sequential (concurrency=1) and parallel
 * (concurrency>1) execution through the same code path.
 */

import type { TaskRunner, TaskInfo } from '../../../infra/task/index.js';
import { info, blankLine } from '../../../shared/ui/index.js';
import { executeAndCompleteTask } from './taskExecution.js';
import { installSigIntHandler } from './sigintHandler.js';
import type { TaskExecutionOptions } from './types.js';

export interface WorkerPoolResult {
  success: number;
  fail: number;
}

/**
 * Run tasks using a worker pool with the given concurrency.
 *
 * Algorithm:
 * 1. Create a shared AbortController
 * 2. Maintain a queue of pending tasks and a set of active promises
 * 3. Fill available slots from the queue
 * 4. Wait for any active task to complete (Promise.race)
 * 5. Record result, fill freed slot from queue
 * 6. Repeat until queue is empty and all active tasks complete
 */
export async function runWithWorkerPool(
  taskRunner: TaskRunner,
  initialTasks: TaskInfo[],
  concurrency: number,
  cwd: string,
  pieceName: string,
  options?: TaskExecutionOptions,
): Promise<WorkerPoolResult> {
  const abortController = new AbortController();
  const { cleanup } = installSigIntHandler(() => abortController.abort());

  let successCount = 0;
  let failCount = 0;

  const queue = [...initialTasks];
  const active = new Map<Promise<boolean>, TaskInfo>();

  try {
    while (queue.length > 0 || active.size > 0) {
      if (abortController.signal.aborted) {
        break;
      }

      fillSlots(queue, active, concurrency, taskRunner, cwd, pieceName, options, abortController);

      if (active.size === 0) {
        break;
      }

      const settled = await Promise.race(
        [...active.keys()].map((p) => p.then(
          (result) => ({ promise: p, result }),
          () => ({ promise: p, result: false }),
        )),
      );

      const task = active.get(settled.promise);
      active.delete(settled.promise);

      if (task) {
        if (settled.result) {
          successCount++;
        } else {
          failCount++;
        }
      }

      if (!abortController.signal.aborted && queue.length === 0) {
        const nextTasks = taskRunner.claimNextTasks(concurrency - active.size);
        queue.push(...nextTasks);
      }
    }
  } finally {
    cleanup();
  }

  return { success: successCount, fail: failCount };
}

function fillSlots(
  queue: TaskInfo[],
  active: Map<Promise<boolean>, TaskInfo>,
  concurrency: number,
  taskRunner: TaskRunner,
  cwd: string,
  pieceName: string,
  options: TaskExecutionOptions | undefined,
  abortController: AbortController,
): void {
  while (active.size < concurrency && queue.length > 0) {
    const task = queue.shift()!;
    const isParallel = concurrency > 1;

    blankLine();
    info(`=== Task: ${task.name} ===`);

    const promise = executeAndCompleteTask(task, taskRunner, cwd, pieceName, options, {
      abortSignal: isParallel ? abortController.signal : undefined,
      taskPrefix: isParallel ? task.name : undefined,
    });
    active.set(promise, task);
  }
}
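`runWithWorkerPool` uses `Promise.race` so it can record each result the moment a task settles and refill the freed slot. When that per-completion bookkeeping is not needed, the same fixed-concurrency behavior can be sketched more compactly with N cooperating worker loops sharing one index — a simplified illustration, not the code above:

```typescript
// Run async tasks with at most `concurrency` in flight at once.
// Results are returned in task order regardless of completion order.
async function runPool<T>(
  tasks: (() => Promise<T>)[],
  concurrency: number,
): Promise<T[]> {
  const results: T[] = [];
  let next = 0;

  // Each worker repeatedly claims the next unclaimed index until drained.
  // Single-threaded JS makes `next++` safe without locking.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  const workers = Array.from(
    { length: Math.min(concurrency, tasks.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

The `Promise.race` form in the file trades this simplicity for control: it can abort mid-queue, count successes and failures separately, and claim newly arrived tasks between completions.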
@@ -51,13 +51,14 @@ import {
  notifySuccess,
  notifyError,
  preventSleep,
  playWarningSound,
  isDebugEnabled,
  writePromptLog,
} from '../../../shared/utils/index.js';
import type { PromptLogRecord } from '../../../shared/utils/index.js';
import { selectOption, promptInput } from '../../../shared/prompt/index.js';
import { EXIT_SIGINT } from '../../../shared/exitCodes.js';
import { getLabel } from '../../../shared/i18n/index.js';
import { installSigIntHandler } from './sigintHandler.js';

const log = createLogger('piece');

@@ -150,6 +151,7 @@ export async function executePiece(
  // Load saved agent sessions for continuity (from project root or clone-specific storage)
  const isWorktree = cwd !== projectCwd;
  const globalConfig = loadGlobalConfig();
+  const shouldNotify = globalConfig.notificationSound !== false;
  const currentProvider = globalConfig.provider ?? 'claude';

  // Prevent macOS idle sleep if configured
@@ -187,6 +189,10 @@ export async function executePiece(
    );
    info(getLabel('piece.iterationLimit.currentMovement', undefined, { currentMovement: request.currentMovement }));

+    if (shouldNotify) {
+      playWarningSound();
+    }
+
    const action = await selectOption(getLabel('piece.iterationLimit.continueQuestion'), [
      {
        label: getLabel('piece.iterationLimit.continueLabel'),
@@ -316,8 +322,10 @@ export async function executePiece(
    const movementIndex = pieceConfig.movements.findIndex((m) => m.name === step.name);
    const totalMovements = pieceConfig.movements.length;

    // Use quiet mode from CLI (already resolved CLI flag + config in preAction)
-    displayRef.current = new StreamDisplay(step.personaDisplayName, isQuietMode(), {
+    const quiet = isQuietMode();
+    const prefix = options.taskPrefix;
+    const agentLabel = prefix ? `${prefix}:${step.personaDisplayName}` : step.personaDisplayName;
+    displayRef.current = new StreamDisplay(agentLabel, quiet, {
      iteration,
      maxIterations: pieceConfig.maxIterations,
      movementIndex: movementIndex >= 0 ? movementIndex : 0,
@@ -439,7 +447,9 @@ export async function executePiece(

    success(`Piece completed (${state.iteration} iterations${elapsedDisplay})`);
    info(`Session log: ${ndjsonLogPath}`);
+    if (shouldNotify) {
      notifySuccess('TAKT', getLabel('piece.notifyComplete', undefined, { iteration: String(state.iteration) }));
+    }
  });

  engine.on('piece:abort', (state, reason) => {
@@ -484,7 +494,9 @@ export async function executePiece(

    error(`Piece aborted after ${state.iteration} iterations${elapsedDisplay}: ${reason}`);
    info(`Session log: ${ndjsonLogPath}`);
+    if (shouldNotify) {
      notifyError('TAKT', getLabel('piece.notifyAbort', undefined, { reason }));
+    }
  });

  // Suppress EPIPE errors from SDK child process stdin after interrupt.
@@ -496,23 +508,25 @@ export async function executePiece(
    throw err;
  };

-  // SIGINT handler: 1st Ctrl+C = graceful abort, 2nd = force exit
-  let sigintCount = 0;
-  const onSigInt = () => {
-    sigintCount++;
-    if (sigintCount === 1) {
-      blankLine();
-      warn(getLabel('piece.sigintGraceful'));
+  const abortEngine = () => {
    process.on('uncaughtException', onEpipe);
    interruptAllQueries();
    engine.abort();
-    } else {
-      blankLine();
-      error(getLabel('piece.sigintForce'));
-      process.exit(EXIT_SIGINT);
-    }
  };
-  process.on('SIGINT', onSigInt);

+  // SIGINT handling: when abortSignal is provided (parallel mode), delegate to caller
+  const useExternalAbort = Boolean(options.abortSignal);
+
+  let onAbortSignal: (() => void) | undefined;
+  let sigintCleanup: (() => void) | undefined;
+
+  if (useExternalAbort) {
+    onAbortSignal = abortEngine;
+    options.abortSignal!.addEventListener('abort', onAbortSignal, { once: true });
+  } else {
+    const handler = installSigIntHandler(abortEngine);
+    sigintCleanup = handler.cleanup;
+  }

  try {
    const finalState = await engine.run();
@@ -522,7 +536,10 @@ export async function executePiece(
      reason: abortReason,
    };
  } finally {
-    process.removeListener('SIGINT', onSigInt);
+    sigintCleanup?.();
+    if (onAbortSignal && options.abortSignal) {
+      options.abortSignal.removeEventListener('abort', onAbortSignal);
+    }
    process.removeListener('uncaughtException', onEpipe);
  }
}

73
src/features/tasks/execute/resolveTask.ts
Normal file
@@ -0,0 +1,73 @@
/**
 * Resolve execution directory and piece from task data.
 */

import { loadGlobalConfig } from '../../../infra/config/index.js';
import { type TaskInfo, createSharedClone, summarizeTaskName, getCurrentBranch } from '../../../infra/task/index.js';
import { info } from '../../../shared/ui/index.js';

export interface ResolvedTaskExecution {
  execCwd: string;
  execPiece: string;
  isWorktree: boolean;
  branch?: string;
  baseBranch?: string;
  startMovement?: string;
  retryNote?: string;
  autoPr?: boolean;
  issueNumber?: number;
}

/**
 * Resolve execution directory and piece from task data.
 * If the task has worktree settings, create a shared clone and use it as cwd.
 * Task name is summarized to English by AI for use in branch/clone names.
 */
export async function resolveTaskExecution(
  task: TaskInfo,
  defaultCwd: string,
  defaultPiece: string,
): Promise<ResolvedTaskExecution> {
  const data = task.data;

  if (!data) {
    return { execCwd: defaultCwd, execPiece: defaultPiece, isWorktree: false };
  }

  let execCwd = defaultCwd;
  let isWorktree = false;
  let branch: string | undefined;
  let baseBranch: string | undefined;

  if (data.worktree) {
    baseBranch = getCurrentBranch(defaultCwd);
    info('Generating branch name...');
    const taskSlug = await summarizeTaskName(task.content, { cwd: defaultCwd });

    info('Creating clone...');
    const result = createSharedClone(defaultCwd, {
      worktree: data.worktree,
      branch: data.branch,
      taskSlug,
      issueNumber: data.issue,
    });
    execCwd = result.path;
    branch = result.branch;
    isWorktree = true;
    info(`Clone created: ${result.path} (branch: ${result.branch})`);
  }

  const execPiece = data.piece || defaultPiece;
  const startMovement = data.start_movement;
  const retryNote = data.retry_note;

  let autoPr: boolean | undefined;
  if (data.auto_pr !== undefined) {
    autoPr = data.auto_pr;
  } else {
    const globalConfig = loadGlobalConfig();
    autoPr = globalConfig.autoPr;
  }

  return { execCwd, execPiece, isWorktree, branch, baseBranch, startMovement, retryNote, autoPr, issueNumber: data.issue };
}
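The `auto_pr` resolution above implements a two-layer fallback: an explicit task-level value always wins, and only an absent (`undefined`) value falls through to the global config. The distinction matters because `false` is a valid override. A minimal sketch of that rule (types are illustrative; the real code reads `data.auto_pr` and `loadGlobalConfig()`):

```typescript
interface TaskData { autoPr?: boolean }
interface GlobalConfig { autoPr?: boolean }

function resolveAutoPr(task: TaskData, global: GlobalConfig): boolean | undefined {
  // An explicit task setting wins even when it is `false`;
  // only `undefined` falls through to the global default.
  return task.autoPr !== undefined ? task.autoPr : global.autoPr;
}
```

A plain `task.autoPr || global.autoPr` would silently discard a task-level `false`, which is why the code checks `!== undefined` instead.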
@@ -17,7 +17,7 @@ import {
  loadGlobalConfig,
} from '../../../infra/config/index.js';
import { confirm } from '../../../shared/prompt/index.js';
-import { createSharedClone, autoCommitAndPush, summarizeTaskName } from '../../../infra/task/index.js';
+import { createSharedClone, autoCommitAndPush, summarizeTaskName, getCurrentBranch } from '../../../infra/task/index.js';
import { DEFAULT_PIECE_NAME } from '../../../shared/constants.js';
import { info, error, success } from '../../../shared/ui/index.js';
import { createLogger } from '../../../shared/utils/index.js';
@@ -111,6 +111,8 @@ export async function confirmAndCreateWorktree(
    return { execCwd: cwd, isWorktree: false };
  }

+  const baseBranch = getCurrentBranch(cwd);
+
  info('Generating branch name...');
  const taskSlug = await summarizeTaskName(task, { cwd });

@@ -121,7 +123,7 @@ export async function confirmAndCreateWorktree(
  });
  info(`Clone created: ${result.path} (branch: ${result.branch})`);

-  return { execCwd: result.path, isWorktree: true, branch: result.branch };
+  return { execCwd: result.path, isWorktree: true, branch: result.branch, baseBranch };
}

/**
@@ -161,7 +163,7 @@ export async function selectAndExecuteTask(
    return;
  }

-  const { execCwd, isWorktree, branch } = await confirmAndCreateWorktree(
+  const { execCwd, isWorktree, branch, baseBranch } = await confirmAndCreateWorktree(
    cwd,
    task,
    options?.createWorktree,
@@ -206,6 +208,7 @@ export async function selectAndExecuteTask(
      branch,
      title: task.length > 100 ? `${task.slice(0, 97)}...` : task,
      body: prBody,
+      base: baseBranch,
      repo: options?.repo,
    });
    if (prResult.success) {
Some files were not shown because too many files have changed in this diff.