Compare commits

...

5 Commits

Author SHA1 Message Date
2ba87c8f6e 📝 docs(project): Add open-source community standard docs and CI workflow
- Add GitHub Issue templates (bug report, feature request) and a Pull Request template
- Add a Code of Conduct and a Security Policy
- Add a CI workflow (GitHub Actions) with ruff linting and import verification
- Add a development dependency file, requirements-dev.txt

📦 build(ci): Configure GitHub Actions continuous integration

- Trigger CI automatically on pushes to main and on pull requests
- Add a lint job running ruff style checks
- Add an import-check job that verifies the core service modules import cleanly

♻️ refactor(structure): Restructure the project directory layout

- Move the 6 service modules from the repository root into the services/ package
- Update import statements in all affected files (main.py, ui/, services/)
- Keep main.py as the only Python entry file in the repository root

🔧 chore(config): Adjust config and asset file paths

- Move config.json into the config/ directory and update references
- Move personal avatar images into assets/faces/ and update .gitignore
- Update the config paths in Dockerfile and docker-compose.yml

📝 docs(readme): Polish the README

- Add project status badges (Python version, License, CI)
- Update the project structure diagram to reflect the actual layout
- Correct the Tab names and navigation paths in the usage guide
- Replace the your-username placeholder with a format hint

🗑️ chore(cleanup): Remove redundant files

- Delete old backup files, test scripts, temporary notes, and run logs
- Delete stray personal image files (now archived under assets/faces/)
2026-02-27 22:12:39 +08:00
b5deafa2cc feat(config): Update model configuration and LLM prompt guidelines
- Upgrade the default LLM model from gemini-2.0-flash to gemini-3-flash-preview
- Change the blogger persona from "sexy fan-service streamer" to "anime coser"
- Refine the guidelines for LLM-generated SD prompts, adding rules for describing characters with Chinese aesthetics
- Add core beauty keywords, sample prompts, and banned keywords for each SD model
- Add a three-part character description method (eyes / skin tone / temperament) and dedicated lighting keywords

📦 build(openspec): Archive old specs and create new ones

- Archive the improve-maintainability spec into the 2026-02-25 directory
- Add the 2026-02-26-improve-ui-layout spec covering the UI layout redesign
- Add the 2026-02-26-optimize-image-generation spec covering image generation optimization
- Add specs for image quality, post-processing, Chinese aesthetics, and LLM prompts under the root openspec/specs directory

♻️ refactor(sd_service): Tune SD model configuration and image post-processing

- Add Chinese-aesthetic feature keywords and Western-face exclusion keywords for each SD model
- Add a high-quality preset tier; enable Hires Fix parameters for SDXL models
- Split post-processing into two independent functions: beauty_enhance and anti_detect_postprocess
- Add a beauty-enhancement feature whose strength is controlled by an enhance_level parameter
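The Hires Fix switch in the high-quality tier can be illustrated as a small payload preset. This is a minimal sketch assuming an Automatic1111-style `/sdapi/v1/txt2img` API; the function name and default values are illustrative, not the project's actual `SDService` code:

```python
def build_txt2img_payload(prompt: str, negative: str, high_quality: bool = False) -> dict:
    """Assemble a txt2img payload; enables Hires Fix for the high-quality tier."""
    payload = {
        "prompt": prompt,
        "negative_prompt": negative,
        "steps": 28,
        "cfg_scale": 7.0,
        "width": 768,
        "height": 1024,
    }
    if high_quality:
        # Hires Fix: render at the base size, then upscale and re-denoise
        payload.update({
            "enable_hr": True,
            "hr_scale": 1.5,
            "hr_upscaler": "Latent",
            "denoising_strength": 0.45,
        })
    return payload
```

The idea is that normal generations stay cheap, while the high-quality tier pays for one extra upscale-and-refine pass.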

♻️ refactor(services): Update the content generation service to support beauty enhancement

- Add an enhance_level parameter to the generate_images function
- Pass the enhancement strength through to the SDService.txt2img call

♻️ refactor(ui): Improve the UI layout and add a beauty-strength control

- Inject a custom CSS theme layer refining fonts, buttons, and card styles
- Move global settings into a dedicated "⚙️ 配置" (Settings) Tab and reorder the Tabs
- Add a beauty-strength slider to the advanced settings of the content creation Tab
- Rework the auto-operation Tab into a 2-column card grid
2026-02-26 22:58:05 +08:00
b635108b89 refactor: split monolithic main.py into services/ + ui/ modules (improve-maintainability)
- main.py: 4360 → 146 lines (96.6% reduction), entry layer only
- services/: rate_limiter, autostart, persona, connection, profile,
  hotspot, content, engagement, scheduler, queue_ops (10 business modules)
- ui/app.py: all Gradio UI code extracted into build_app(cfg, analytics)
- Fix: with gr.Blocks() indented inside build_app function
- Fix: cfg.all property (not get_all method)
- Fix: STATUS_LABELS, get_persona_keywords, fetch_proactive_notes imports
- Fix: queue_ops module-level set_publish_callback moved into configure()
- Fix: pub_queue.format_*() wrapped as queue_format_table/calendar helpers
- All 14 files syntax-verified, build_app() runtime-verified
- 58/58 tasks complete
2026-02-24 22:50:56 +08:00
d88b4e9a3b ♻️ refactor(config): Implement secure config storage and atomic writes
- Add `get_secure()` and `set_secure()` methods that read sensitive settings from environment variables or the system keyring first; `config.json` stores only placeholders
- Change `save()` to an atomic write using a temporary file plus `os.replace()`, so a process interruption cannot corrupt the config file
- Integrate secure reads/writes into `add_llm_provider()` and `get_active_llm()`, automatically migrating legacy plaintext API keys

♻️ refactor(analytics): Implement atomic writes for analytics data

- Change `_save_analytics()` and `_save_weights()` to atomic writes using a temporary file plus `os.replace()`
- Ensure the original data files stay intact if the process is killed mid-write
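The temp-file-plus-`os.replace()` pattern used by both save paths looks roughly like this minimal sketch (the function name is illustrative):

```python
import json
import os
import tempfile


def atomic_save(path: str, data: dict) -> None:
    """Write JSON atomically: dump to a temp file in the same directory,
    then rename it over the target in a single os.replace() step."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    finally:
        if os.path.exists(tmp_path):  # only true if something failed above
            os.unlink(tmp_path)
```

Because the rename is atomic, a reader (or a crash) sees either the complete old file or the complete new one, never a half-written mix. The temp file must live in the same directory as the target, since `os.replace()` cannot atomically cross filesystems.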

♻️ refactor(main): Harden publishing and modularize the code

- Validate input before publishing in `publish_to_xhs()` (title length, image count, file existence) and clean up the generated temporary image files in a `finally` block
- Protect the global note-list caches `_cached_proactive_entries` and `_cached_my_note_entries` with a `threading.RLock`, adding thread-safe accessors `_set_cache()` and `_get_cache()`
- Extract the content creation Tab's UI construction into the `ui/tab_create.py` module; the main file assembles it via `build_tab()`
- Lift the Gradio app's CSS and theme configuration into module-level variables for maintainability

📦 build(deps): Add the keyring dependency

- Add `keyring>=24.0.0` to `requirements.txt` for system credential management

📝 docs(openspec): Add production-readiness audit documents

- Add the design document, proposal, task list, and per-feature specs under `openspec/changes/archive/2026-02-24-production-readiness-audit/`
- Sync the core feature specs into the `openspec/specs/` directory
2026-02-24 21:53:36 +08:00
4cde2f7c67 feat(config): Add auto-save for global settings
- Auto-save image generation parameters (quality_mode, sd_steps, sd_cfg_scale, sd_negative_prompt)
- Auto-save auto-operation scheduling parameters (sched_comment_on, sched_like_on, sched_fav_on, sched_reply_on, sched_publish_on)
- Auto-save the smart-learning parameter (learn_interval)
- Auto-save the content scheduling parameter (queue_gen_count)
- Improve persona switching to save to config and update the queue topic pool at the same time
- Restore global settings automatically on page load

📝 docs(config): Update the configuration management docs

- Add the new default config entries in config_manager.py
- Load global settings automatically at startup in main.py
- Update the config-save test script _test_config_save.py

📦 build(ui): Refine UI interactions

- Auto-save image generation parameters to config on change
- Auto-save auto-operation parameters to config on change
- Auto-save smart-learning parameters to config on change
- Auto-save content scheduling parameters to config on change
- Fix the queue topic pool not updating when switching personas

🐛 fix(queue): Fix image display in the publish queue

- Add an image preview feature in publish_queue.py
- Embed images into the markdown display as base64-encoded data
- Show the image file size and status information
2026-02-24 21:04:33 +08:00
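Embedding an image preview as base64 in markdown can be sketched like this; the function name, size cap, and message strings are illustrative, not the project's actual `publish_queue.py` code:

```python
import base64
import os


def image_to_markdown(path: str, max_bytes: int = 5_000_000) -> str:
    """Render an image as a base64 data-URI markdown embed, with size/status info."""
    if not os.path.exists(path):
        return f"⚠️ missing: {path}"
    size = os.path.getsize(path)
    if size > max_bytes:
        return f"⚠️ too large to preview ({size / 1024:.0f} KB): {path}"
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    ext = os.path.splitext(path)[1].lstrip(".").lower() or "png"
    name = os.path.basename(path)
    return f"![{name}](data:image/{ext};base64,{b64}) ({size / 1024:.1f} KB)"
```

Data-URI embeds work in Gradio markdown components without serving the file separately, at the cost of roughly 4/3 the file size in the page, hence the size cap.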
145 changed files with 11930 additions and 4587 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,37 @@
---
name: Bug report
about: Report a problem and help us improve
title: "[BUG] "
labels: bug
assignees: ''
---
## Problem description
Describe the problem clearly and concisely.
## Steps to reproduce
1. Go to '...'
2. Click '...'
3. Scroll to '...'
4. The error appears
## Expected behavior
Describe what you expected to happen.
## Actual behavior
Describe what actually happened; attach error screenshots or log output.
## Environment
- **Operating system**: (e.g., Windows 11 / macOS 14 / Ubuntu 22.04)
- **Python version**: (run `python --version`)
- **autobot version/commit**: (run `git rev-parse --short HEAD`)
- **Relevant dependency versions**: (e.g., gradio, openai)
## Additional context
Paste any other context, screenshots, or log files related to the problem here.


@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest a new idea or improvement for this project
title: "[FEATURE] "
labels: enhancement
assignees: ''
---
## Background and need
Describe the problem or pain point you are facing. For example: "Whenever I want to [...], it is inconvenient because [...]"
## Desired solution
Clearly describe the feature or behavior you would like.
## Alternatives
Have you considered other solutions or workarounds? Describe them.
## Additional context
Add any screenshots, reference links, or other context that helps explain this feature request.

.github/prompts/opsx-apply.prompt.md (new file)

@@ -0,0 +1,149 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

.github/prompts/opsx-archive.prompt.md (new file)

@@ -0,0 +1,154 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,239 @@
---
description: Archive multiple completed changes at once
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

.github/prompts/opsx-continue.prompt.md (new file)

@@ -0,0 +1,111 @@
---
description: Continue working on a change - create the next artifact (Experimental)
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change with `/opsx:apply` or archive it with `/opsx:archive`."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

.github/prompts/opsx-explore.prompt.md (new file)

@@ -0,0 +1,171 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

.github/prompts/opsx-ff.prompt.md vendored Normal file
@@ -0,0 +1,91 @@
---
description: Create a change and generate all artifacts needed for implementation in one go
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
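Deriving the kebab-case name can be sketched in shell. This is a naive transliteration, and in practice the agent should also shorten long words (e.g. "authentication" to "auth"), which is left to judgment here:

```shell
# Naive kebab-case derivation: lowercase, collapse non-alphanumeric runs
# into "-", then trim leading/trailing dashes.
desc="Add User Authentication"
name=$(printf '%s' "$desc" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//')
echo "$name"   # → add-user-authentication
```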
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

.github/prompts/opsx-new.prompt.md vendored Normal file
@@ -0,0 +1,66 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow

.github/prompts/opsx-onboard.prompt.md vendored Normal file
@@ -0,0 +1,522 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
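A minimal sketch of the first and fifth checks, run here against a throwaway sample file (in a real scan, point the greps at the project's actual source tree and extensions):

```shell
# Demo on a sample file; substitute the project's source directory in practice.
demo=$(mktemp -d)
cat > "$demo/app.ts" <<'EOF'
// TODO: handle request timeout
function log() { console.log("debug"); }
EOF
# 1. TODO/FIXME-style markers
grep -rnE 'TODO|FIXME|HACK|XXX' "$demo"
# 5. Debug artifacts
grep -rnE 'console\.(log|debug)|debugger' "$demo"
```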
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

.github/prompts/opsx-sync.prompt.md vendored Normal file
@@ -0,0 +1,131 @@
---
description: Sync delta specs from a change to main specs
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check whether it can be inferred from conversation context. If the context is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result

.github/prompts/opsx-verify.prompt.md vendored Normal file
@@ -0,0 +1,161 @@
---
description: Verify implementation matches change artifacts before archiving
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check whether it can be inferred from conversation context. If the context is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
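Counting checkboxes can be sketched with grep (sample data inline here; the real input is the change's tasks.md):

```shell
# Count complete vs. total tasks from checkbox lines.
tasks='- [x] 1.1 Parse config
- [ ] 1.2 Validate input
- [x] 2.1 Add tests'
total=$(printf '%s\n' "$tasks" | grep -cE '^- \[[ x]\]')
done_count=$(printf '%s\n' "$tasks" | grep -cE '^- \[x\]')
echo "$done_count/$total complete"   # → 2/3 complete
```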
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
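Extracting the requirement names for this coverage pass can be sketched as follows (sample spec content inline; the real input is each delta spec file):

```shell
# Pull requirement names out of a delta spec's "### Requirement:" headings.
spec='## ADDED Requirements
### Requirement: Session Timeout
The system SHALL expire idle sessions.
### Requirement: Rate Limiting
The system SHALL throttle repeated requests.'
reqs=$(printf '%s\n' "$spec" | sed -n 's/^### Requirement: //p')
echo "$reqs"
```

Each extracted name then seeds a keyword search over the codebase.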
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

.github/pull_request_template.md vendored Normal file
@@ -0,0 +1,35 @@
## Change Type
Check the option that applies (put an `x` inside the `[ ]`):
- [ ] 🐛 Bug Fix (fixes an issue without breaking existing functionality)
- [ ] ✨ Feature (new functionality without breaking existing functionality)
- [ ] 📝 Docs (documentation-only changes)
- [ ] ♻️ Refactor (code restructuring that neither fixes a bug nor adds a feature)
- [ ] 🎨 Style (formatting, indentation, etc.; no change to code logic)
- [ ] ⚡ Performance (performance optimization)
- [ ] 🔧 Chore (build process, tooling configuration, and other miscellaneous changes)
## Change Description
Briefly describe what this PR does and why these changes are needed.
## Related Issue
Closes # (fill in the issue number; delete this line if not applicable)
## Testing Notes
Describe how you tested these changes (manual test steps, automated tests, etc.):
- [ ] I ran `python main.py` locally and confirmed the app starts normally
- [ ] I verified that the affected features still work as expected
- [ ] I ran `ruff check .` and confirmed no new lint errors
## Screenshots (if applicable)
If this PR includes UI changes, please attach before/after comparison screenshots.
## Notes
Call out anything you'd like reviewers to pay special attention to.

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the context is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
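The checkbox update in the loop above amounts to flipping one `- [ ]` marker to `- [x]`. A minimal sketch (the helper name and exact checkbox regex are illustrative):

```python
import re

def mark_task_complete(tasks_md: str, task_text: str) -> str:
    """Flip `- [ ] <task_text>` to `- [x] <task_text>` in a tasks file.

    Only the first matching line is touched; other pending tasks stay as-is.
    """
    pattern = re.compile(
        r"^(\s*)- \[ \] " + re.escape(task_text) + r"\s*$", re.MULTILINE
    )
    return pattern.sub(lambda m: f"{m.group(1)}- [x] {task_text}", tasks_md, count=1)
```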
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
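The incomplete-vs-complete count described above can be sketched as a pair of multiline regex scans (the function name is illustrative):

```python
import re

def count_tasks(tasks_md: str) -> tuple:
    """Return (complete, incomplete) checkbox counts from a tasks file."""
    complete = len(re.findall(r"^\s*- \[x\] ", tasks_md, re.MULTILINE))
    incomplete = len(re.findall(r"^\s*- \[ \] ", tasks_md, re.MULTILINE))
    return complete, incomplete
```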
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
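The target-name generation and collision check above can be sketched as follows; the helper name is illustrative, and the path layout follows the `openspec/changes/archive/` convention described here:

```python
from datetime import date
from pathlib import Path

def archive_target(change: str, root: str = "openspec/changes") -> Path:
    """Build the dated archive path and fail if it already exists."""
    target = Path(root) / "archive" / f"{date.today():%Y-%m-%d}-{change}"
    if target.exists():
        raise FileExistsError(
            f"{target} already exists - rename the existing archive "
            "or use a different date"
        )
    return target
```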
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
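The requirement-name extraction in step 3c is a single multiline regex over each delta spec (the helper name is illustrative):

```python
import re

def requirement_names(spec_md: str) -> list:
    """Extract names from lines matching `### Requirement: <name>`."""
    return re.findall(r"^### Requirement: (.+)$", spec_md, re.MULTILINE)
```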
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
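Building the `capability -> changes` map and filtering it down to conflicts can be sketched like this (input shape is an assumption: each change name mapped to the capabilities its delta specs cover):

```python
def detect_conflicts(deltas: dict) -> dict:
    """Map capability -> changes touching it, keeping only conflicts (2+ changes).

    `deltas` maps each change name to a list of capabilities it has delta
    specs for, e.g. {"change-a": ["auth"], "change-c": ["api"]}.
    """
    by_capability = {}
    for change, capabilities in deltas.items():
        for cap in capabilities:
            by_capability.setdefault(cap, []).append(change)
    # Only capabilities touched by two or more changes are conflicts.
    return {cap: names for cap, names in by_capability.items() if len(names) > 1}
```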
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
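Selecting the first `ready` artifact from the status output can be sketched as a short scan; the `artifacts` and `status` field names follow the description above, but the full payload shape is an illustrative assumption:

```python
def next_ready_artifact(status: dict):
    """Return the id of the first artifact with status "ready", else None."""
    for artifact in status.get("artifacts", []):
        if artifact.get("status") == "ready":
            return artifact.get("id")
    return None
```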
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

.github/skills/openspec-explore/SKILL.md
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
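A mechanical slug of the description gives a reasonable starting name; note the example above also abbreviates ("authentication" → "auth"), which is a judgment call this sketch does not attempt:

```python
import re

def kebab_case(description: str) -> str:
    """Derive a kebab-case change name from a free-form description."""
    words = re.findall(r"[a-z0-9]+", description.lower())
    return "-".join(words)
```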
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
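The stop condition in step 4b - every artifact listed in `applyRequires` reaching `done` - can be sketched as a set check. Field names follow the status output described above; the payload shape is an illustrative assumption:

```python
def apply_ready(status: dict) -> bool:
    """True when every artifact in `applyRequires` has status "done"."""
    done = {a.get("id") for a in status.get("artifacts", [])
            if a.get("status") == "done"}
    return all(req in done for req in status.get("applyRequires", []))
```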
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next

---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow

.github/skills/openspec-onboard/SKILL.md vendored Normal file

@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
generatedBy: "1.1.1"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
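As an illustrative sketch (not part of the skill itself), the TODO/FIXME marker scan described above can be done with a few lines of Python; the file-extension filter here is an assumption about which source files matter:

```python
import re
from pathlib import Path

# Markers the onboarding scan looks for in source files
MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")
SOURCE_SUFFIXES = {".py", ".ts", ".js", ".go", ".rs"}

def find_markers(root: str = "src"):
    """Yield (path, line_no, text) for TODO/FIXME-style comments under root."""
    for path in Path(root).rglob("*.*"):
        if path.suffix not in SOURCE_SUFFIXES:
            continue
        try:
            lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for i, line in enumerate(lines, 1):
            if MARKERS.search(line):
                yield str(path), i, line.strip()
```

Each hit gives a concrete `file:line` location to feed into the task suggestions below.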
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
generatedBy: "1.1.1"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
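For illustration, here is a rough sketch of how the delta headings above could be split into ADDED/MODIFIED/REMOVED/RENAMED groups before merging (an assumption about the file structure, not openspec's actual parser):

```python
import re

DELTA_SECTIONS = ("ADDED", "MODIFIED", "REMOVED", "RENAMED")

def split_delta_sections(markdown: str) -> dict:
    """Group delta-spec lines under their '## <OP> Requirements' heading."""
    sections, current = {}, None
    for line in markdown.splitlines():
        m = re.match(r"^## (\w+) Requirements\s*$", line)
        if m and m.group(1) in DELTA_SECTIONS:
            current = m.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

delta = """## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
## REMOVED Requirements
### Requirement: Deprecated Feature
"""
parts = split_delta_sections(delta)
print(sorted(parts))  # ['ADDED', 'REMOVED']
```

The actual merging of each group into the main spec stays judgment-driven, as described above.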
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
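The checkbox count described above amounts to a simple scan (an illustrative helper, not part of the openspec CLI; real tasks.md files may nest lists or use uppercase `X`):

```python
import re

# Matches "- [ ]" (incomplete) and "- [x]"/"- [X]" (complete) list items
CHECKBOX = re.compile(r"^\s*-\s*\[( |x|X)\]", re.MULTILINE)

def count_tasks(tasks_md: str) -> tuple[int, int]:
    """Return (complete, total) checkbox counts from tasks.md text."""
    marks = CHECKBOX.findall(tasks_md)
    complete = sum(1 for m in marks if m.lower() == "x")
    return complete, len(marks)

sample = "## 1. Setup\n- [x] 1.1 Add module\n- [ ] 1.2 Wire UI\n"
print(count_tasks(sample))  # (1, 2)
```

Any shortfall between complete and total maps directly to CRITICAL issues in the report.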
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,48 @@
name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  lint:
    name: Lint (ruff)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install ruff
        run: pip install "ruff>=0.4.0"
      - name: Run ruff
        run: ruff check . --select E,F,W --ignore E501,E402,W291,W293

  import-check:
    name: Import Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Verify core service imports
        run: |
          python -c "from services.config_manager import ConfigManager; print('config_manager OK')"
          python -c "from services.llm_service import LLMService; print('llm_service OK')"
          python -c "from services.mcp_client import get_mcp_client; print('mcp_client OK')"
          python -c "from services.analytics_service import AnalyticsService; print('analytics_service OK')"
          python -c "from services.publish_queue import STATUS_LABELS; print('publish_queue OK')"
          python -c "from services.sd_service import SDService; print('sd_service OK')"

.gitignore vendored

@@ -16,7 +16,7 @@ venv/
 env/
 
 # ========== 敏感配置 ==========
-config.json
+config/config.json
 cookies.json
 *.cookie
@@ -43,3 +43,10 @@ config copy.json
 # ========== 临时文件 ==========
 *.tmp
 *.bak
+
+# ========== 个人媒体资产(隐私,不入版本控制) ==========
+assets/faces/
+
+# ========== 自动生成的启动脚本(含机器绝对路径) ==========
+scripts/_autostart.bat
+scripts/_autostart.vbs

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,83 @@
# 贡献者行为准则
## 我们的承诺
作为成员、贡献者和领导者,我们承诺让每一位参与者都能在无骚扰的环境中参与我们的社区,不论其年龄、体型、是否有可见或不可见的残疾、族裔、性别特征、性别认同与表达、经验水平、教育程度、社会经济地位、国籍、外表、种族、种姓、肤色、宗教信仰或性取向。
我们承诺以有助于建立开放、友好、多元、包容和健康社区的方式行事和互动。
## 我们的准则
有助于为我们的社区创造积极环境的行为示例包括:
* 对他人表现出同理心和善意
* 尊重不同的意见、观点和经历
* 给予并优雅地接受建设性反馈
* 承担责任并向受我们错误影响的人道歉,并从经历中学习
* 关注整个社区的最大利益,不只是我们个人的利益
不可接受的行为示例包括:
* 使用性化语言或图像,以及任何形式的性关注或性骚扰
* 发表挑衅性、侮辱性或贬义的评论,以及针对个人或政治的攻击
* 公开或私下骚扰
* 未经明确许可,发布他人的私人信息,例如实际地址或电子邮件地址
* 在专业环境中其他可被合理认为不适当的行为
## 执行责任
社区领导者有责任阐明和执行我们可接受行为的准则,并在遇到任何他们认为不适当、有威胁、冒犯或有害的行为时采取适当且公平的纠正行动。
社区领导者有权利和责任删除、编辑或拒绝与本行为准则不符的评论、提交、代码、wiki 编辑、议题及其他贡献,并在适当时就审核决定的原因进行沟通。
## 适用范围
本行为准则适用于所有社区空间,也适用于当个人在公共空间中正式代表本社区时的情形。代表我们社区的示例包括:使用官方电子邮件地址、通过官方社交媒体账号发帖,或在线上或线下活动中担任指定代表。
## 执行
如遇到骚扰、滥用或其他不可接受的行为,可通过 GitHub Issues 或在本仓库的 Discussions 中向社区维护者报告。所有投诉都将被迅速、公平地审查和调查。
所有社区领导者有义务尊重任何事件报告者的隐私和安全。
## 执行指南
社区领导者将遵循以下社区影响指南,确定对违反本行为准则的任何行为的后果:
### 1. 纠正
**社区影响**:使用不当语言或其他被认为不专业或社区不欢迎的行为。
**后果**:社区领导者会发出私人书面警告,说明违规的性质并解释为何该行为不当。可能会要求公开道歉。
### 2. 警告
**社区影响**:单次事件或一系列行为的违规。
**后果**:警告并说明持续行为的后果。在指定时间内不得与相关人员互动,包括主动与执行本行为准则的人员互动。这包括避免在社区空间以及社交媒体等外部渠道进行互动。违反这些条款可能会导致临时或永久封禁。
### 3. 临时封禁
**社区影响**:严重违反社区准则,包括持续的不当行为。
**后果**:在指定时间内临时禁止与社区进行任何形式的互动或公开通信。在此期间,不允许与相关人员进行任何公开或私下的互动,包括主动与执行本行为准则的人员互动。违反这些条款可能会导致永久封禁。
### 4. 永久封禁
**社区影响**:表现出违反社区准则的模式,包括持续的不当行为、对某个人的骚扰或对某类人群的攻击或诋毁。
**后果**:永久禁止在社区内进行任何形式的公开互动。
## 归属
本行为准则改编自 [Contributor Covenant][homepage] v2.1 版,详情请访问 [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]。
社区影响指南的灵感来自 [Mozilla 的行为准则执行阶梯][Mozilla CoC]。
有关本行为准则常见问题的解答,请参阅 [https://www.contributor-covenant.org/faq][FAQ]。其他语言的翻译请参阅 [https://www.contributor-covenant.org/translations][translations]。
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations


@@ -28,12 +28,13 @@ RUN apt-get update && \
 COPY --from=builder /install /usr/local
 
 # 复制项目代码
-COPY config_manager.py llm_service.py sd_service.py mcp_client.py main.py ./
-COPY requirements.txt ./
-COPY config.example.json ./
+COPY main.py requirements.txt ./
+COPY services/ services/
+COPY ui/ ui/
+COPY config/config.example.json config/config.example.json
 
 # 创建工作目录
-RUN mkdir -p xhs_workspace
+RUN mkdir -p xhs_workspace config logs
 
 # Gradio 默认端口
 EXPOSE 7860


@@ -13,6 +13,11 @@
 <a href="#常见问题">FAQ</a>
 <a href="#贡献指南">贡献</a>
 </p>
+<p align="center">
+<img src="https://img.shields.io/badge/python-3.10+-blue" alt="Python">
+<a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
+<img src="https://github.com/<your-github-username>/xhs-autobot/actions/workflows/ci.yml/badge.svg" alt="CI">
+</p>
 </p>
 
 ---
@@ -91,7 +96,7 @@
 ```bash
 # 1. 克隆项目
-git clone https://github.com/your-username/xhs-autobot.git
+git clone https://github.com/<your-github-username>/xhs-autobot.git
 cd xhs-autobot
 
 # 2. 创建虚拟环境(推荐)
@@ -105,8 +110,8 @@ source .venv/bin/activate
 pip install -r requirements.txt
 
 # 4. 复制配置文件并填写你的 API Key
-cp config.example.json config.json
-# 编辑 config.json,填写 api_key、base_url 等
+cp config/config.example.json config/config.json
+# 编辑 config/config.json,填写 api_key、base_url 等
 
 # 5. 启动!
 python main.py
@@ -120,12 +125,12 @@ python main.py
 ```bash
 # 1. 克隆项目
-git clone https://github.com/your-username/xhs-autobot.git
+git clone https://github.com/<your-github-username>/xhs-autobot.git
 cd xhs-autobot
 
 # 2. 准备配置文件
-cp config.example.json config.json
-# 编辑 config.json,填写 api_key、base_url 等
+cp config/config.example.json config/config.json
+# 编辑 config/config.json,填写 api_key、base_url 等
 
 # ⚠️ mcp_url 改为容器网络地址:
 # "mcp_url": "http://xhs-mcp:18060/mcp"
@@ -164,7 +169,7 @@ docker compose exec xhs-autobot bash
 #### 启用 Stable Diffusion(需要 NVIDIA GPU)
 
-编辑 `docker-compose.yml`,取消 `sd-webui` 部分的注释,并将 `config.json` 中的 `sd_url` 改为:
+编辑 `docker-compose.yml`,取消 `sd-webui` 部分的注释,并将 `config/config.json` 中的 `sd_url` 改为:
 
 ```json
 "sd_url": "http://sd-webui:7860"
@@ -205,11 +210,11 @@ python launch.py --api
 ### 首次使用流程
 
-1. **配置 LLM** — 展开「⚙️ 全局设置」,添加 LLM 提供商(API Key + Base URL),点击「连接 LLM」
-2. **连接 SD**(可选)— 填写 SD WebUI URL,点击「连接 SD」
+1. **配置 LLM** — 切换到「⚙️ 配置」Tab,添加 LLM 提供商(API Key + Base URL),点击「连接 LLM」
+2. **连接 SD**(可选)— 在「⚙️ 配置」Tab 填写 SD WebUI URL,点击「连接 SD」
 3. **检查 MCP** — 点击「检查 MCP」确认小红书服务正常
 4. **登录小红书** — 切换到「🔐 账号登录」Tab,扫码登录
-5. **选择人设** — 在人设下拉框选择博主人设(影响文案风格 + 图片视觉)
+5. **选择人设** — 在「⚙️ 配置」Tab 的人设下拉框选择博主人设(影响文案风格 + 图片视觉)
 6. **开始创作** — 切换到「✨ 内容创作」Tab,输入主题,一键生成
 
 ### 自动化运营
@@ -243,7 +248,7 @@ python launch.py --api
 ## ⚙️ 配置说明
 
-配置文件 `config.json` 会在运行时自动创建和保存。首次使用请从 `config.example.json` 复制:
+配置文件 `config/config.json` 会在运行时自动创建和保存。首次使用请从 `config/config.example.json` 复制:
 
 ```json
 {
@ -280,24 +285,47 @@ python launch.py --api
``` ```
```
xhs-autobot/
├── main.py                  # Main entry point (Gradio UI + event bindings)
├── config/                  # Configuration directory
│   ├── config.json          # Runtime config (gitignored)
│   └── config.example.json  # Config template
├── logs/                    # Runtime logs (gitignored)
│   └── autobot.log
├── docs/                    # Reference docs
│   └── mcp.md               # xiaohongshu-mcp reference
├── scripts/                 # Startup scripts generated at runtime (gitignored)
│   ├── _autostart.bat
│   └── _autostart.vbs
├── requirements.txt         # Python dependencies
├── requirements-dev.txt     # Dev tooling dependencies (ruff, etc.)
├── Dockerfile               # Docker image build
├── docker-compose.yml       # Docker Compose orchestration
├── services/                # Business-logic layer
│   ├── config_manager.py    # Config management (singleton, auto-save)
│   ├── llm_service.py       # LLM service wrapper (copy generation, trend analysis, etc.)
│   ├── sd_service.py        # Stable Diffusion wrapper (3 model adapters, 9 persona visual profiles)
│   ├── mcp_client.py        # Xiaohongshu MCP client (search, publish, comment, like)
│   ├── analytics_service.py # Note analytics & weight-learning service
│   ├── publish_queue.py     # Publishing queue (SQLite + background publisher)
│   ├── scheduler.py         # Automation scheduler
│   ├── content.py           # Copy generation, image generation, export, publish
│   ├── hotspot.py           # Trend detection, trend-based generation, note cache
│   ├── engagement.py        # Comment butler: manual comments, replies, interactions
│   ├── profile.py           # Xiaohongshu account profile parsing & visualization
│   ├── persona.py           # Persona management: constants, keyword pools, topic pools
│   ├── connection.py        # LLM/SD/MCP/XHS connection management
│   ├── queue_ops.py         # Publish-queue operations
│   ├── rate_limiter.py      # Rate limiting, daily quotas, cooldown management
│   └── autostart.py         # Windows autostart management
├── ui/                      # Gradio UI layer
│   ├── app.py               # Main UI builder (all tab UIs + event bindings)
│   └── tab_create.py        # Tab 1「✨ 内容创作」component definitions
├── assets/
│   └── faces/               # Avatar images (gitignored)
└── xhs_workspace/           # Exported copy and images (gitignored)
    ├── publish_queue.db     # Publishing-queue database
    ├── analytics_data.json  # Note performance data
    └── content_weights.json # Content weight data
```
---
@ -371,7 +399,7 @@ xhs-autobot/
<details>
<summary><b>Q: How do I add a custom persona?</b></summary>
Type a custom persona description directly into the persona dropdown (free-form input is supported). For a matching topic pool and keywords, add an entry to `PERSONA_POOL_MAP` in `services/persona.py`; for a matching SD visual profile, add one to `PERSONA_SD_PROFILES` in `services/sd_service.py`.
</details>
<details>
SECURITY.md Normal file
@ -0,0 +1,36 @@
# Security Policy
## Supported Versions
We currently provide security updates for the following versions:
| Version | Supported |
| ---- | -------- |
| main branch (latest commit) | ✅ |
| Older versions / archived commits | ❌ |
## Reporting a Vulnerability
**Please do not report security vulnerabilities through public GitHub Issues.**
If you discover a vulnerability, please contact us privately through one of the following channels:
1. **GitHub Security Advisory (recommended)**: open this repository's [Security → Advisories](../../security/advisories/new) page and click "Report a vulnerability" to file a private report.
2. **Message the maintainers**: if the above is unavailable, contact the repository maintainers directly on GitHub.
### Please include in your report
- A detailed description of the vulnerability
- Scope of impact (affected versions and modules)
- Reproduction steps (if possible)
- An assessment of the potential impact
- Your suggested fix (if any)
## Response Commitments
- We will acknowledge receipt of your report within **7 business days**
- We will assess the vulnerability and provide a fix timeline within **30 days**
- After a fix is released, we will credit you in the release notes (with your consent)
Thank you for helping keep this project secure!
@ -1 +0,0 @@
The current script already closes the core loop of "inspiration -> copy -> image -> publish". As a personal assistant tool it is an excellent MVP (minimum viable product); but as a professional operations tool, or for commercial use, the current version still has clear gaps. Below is a full analysis across four dimensions (content quality, operations loop, account safety, feature depth), with upgrade suggestions.

📊 Current feature scorecard

| Dimension | Score | Assessment |
| --- | --- | --- |
| Core workflow | ⭐⭐⭐⭐⭐ | The pipeline runs end to end with no switching between tools; a huge efficiency gain. |
| Content quality | ⭐⭐⭐ | LLM copy is generic and lacks personality; SD only does basic generation with no fine control. |
| Operations | ⭐⭐ | Only "publish" is covered; "monitor" (analytics) and "reply" (engagement) are missing. |
| Multimedia | ⭐⭐ | Images only; no video support (even though MCP supports it). |
| Stability | ⭐⭐⭐ | Depends on the local environment and cookie lifetime; no retry or account management. |

🔍 Gap analysis and improvement suggestions

1. Limited visual capability (pain point: images are hard to control). Only basic txt2img is used. Problem: it is hard to control poses, keep a consistent character (e.g. the same blogger IP), or place a product in a specific scene. Gaps: no ControlNet support (no pose control via Openpose, no line-art coloring via Canny); no LoRA switching (cannot quickly switch styles such as anime vs. photoreal vs. film); no img2img (cannot edit from a reference image). 💡 Suggestion: expose ControlNet parameters in the UI, or add a "style preset" dropdown that switches LoRA behind the scenes.

2. No topic/trend assistance (pain point: not knowing what to write). The tool mainly relies on the user typing a topic. Problem: if the user does not know what is currently trending, the resulting post may get no readers. Gap: MCP search goes unused (xiaohongshu-mcp offers search_feeds, but the script never calls it). 💡 Suggestion: add a "trend detection" tab. Flow: user enters a keyword -> MCP search -> LLM analyzes the titles and structure of hot notes -> produce a "viral imitation" plan.

3. Missing video support (pain point: video gets more traffic). Xiaohongshu currently pushes video heavily. Problem: the UI and logic only handle images; MCP supports publish_with_video, but it is not wired up. 💡 Suggestion: add an "upload video" or "AI-generated video" entry in the UI; integrate the Runway / Luma APIs, or local AnimateDiff, to generate a few seconds of motion.

4. Missing engagement (pain point: fire and forget). For Xiaohongshu, nurturing the account and replying matter as much as posting. Problem: the tool currently publishes and walks away. Gaps: comment management (no automatic replies, no funneling into private channels); data feedback (how many reads did a note get? invisible from the script). 💡 Suggestions: add a "comment butler" module (periodically poll MCP for new comments -> LLM drafts replies -> MCP posts them); add a "data dashboard" (call user_profile to show yesterday's likes and follower growth).

5. Account matrix and safety (pain point: single point of failure). Problem: single-account mode only. Gaps: multi-account switching (running 5 accounts means repeated QR scans or manual cookie-file swaps); scheduled publishing (only "publish now"; real operations need automatic posting during the 18:00-21:00 evening peak). 💡 Suggestions: manage multiple cookie sets with a simple SQLite database or JSON file; use APScheduler to "save to drafts, auto-publish via MCP at a set time".

🛠️ Next upgrade roadmap. To evolve the script into a professional V2.0, add features in this order. Phase 1: complete the MCP capabilities (low cost, high payoff): wire up search (have the AI read 5 similar hot notes before writing); wire up the data panel (show the account's followers and likes in the sidebar). Phase 2: stronger visuals (better content): SD upgrades (support uploading a reference image for img2img); local gallery (a "local upload" button to mix in your own photos instead of AI images). Phase 3: automated operations (hands-free): auto-reply bot (answers comments in the configured persona, e.g. knowing big sister / sharp-tongued blogger); scheduled tasks (set up a queue and let it run on its own).
@ -1,3 +0,0 @@
@echo off
cd /d "F:\3_Personal\AI\xhs_bot\autobot"
"F:\3_Personal\AI\xhs_bot\autobot\.venv\Scripts\pythonw.exe" "F:\3_Personal\AI\xhs_bot\autobot\main.py"
@ -1,3 +0,0 @@
Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "F:\3_Personal\AI\xhs_bot\autobot\_autostart.bat" & chr(34), 0
Set WshShell = Nothing
Binary file not shown (image, 6.5 MiB, removed)
@ -1,10 +0,0 @@
{
"api_key": "sk-d212b926f51f4f0f9297629cd2ab77b4",
"base_url": "https://api.deepseek.com/v1",
"sd_url": "http://127.0.0.1:7860",
"mcp_url": "http://localhost:18060/mcp",
"model": "deepseek-reasoner",
"persona": "温柔知性的时尚博主",
"auto_reply_enabled": false,
"schedule_enabled": false
}
@ -1,26 +0,0 @@
{
"api_key": "sk-NPZECL5m3BmZv0S9YO9KOd179pepRH08iYeAn1Tk07Jux9Br",
"base_url": "https://wolfai.top/v1",
"sd_url": "http://127.0.0.1:7861",
"mcp_url": "http://localhost:18060/mcp",
"model": "gemini-3-flash-preview",
"persona": "温柔知性的时尚博主",
"auto_reply_enabled": false,
"schedule_enabled": false,
"my_user_id": "69872540000000002303cc42",
"active_llm": "wolfai",
"llm_providers": [
{
"name": "默认",
"api_key": "sk-d212b926f51f4f0f9297629cd2ab77b4",
"base_url": "https://api.deepseek.com/v1"
},
{
"name": "wolfai",
"api_key": "sk-NPZECL5m3BmZv0S9YO9KOd179pepRH08iYeAn1Tk07Jux9Br",
"base_url": "https://wolfai.top/v1"
}
],
"use_smart_weights": false,
"xsec_token": "AB1StlX7ffxsEkfyNuTFDesPlV2g1haPcYuh1-AkYcQxo="
}
@ -11,8 +11,8 @@ services:
    ports:
      - "7860:7860"
    volumes:
      # Config file mount (on first use, copy from config/config.example.json and fill in values)
      - ./config/config.json:/app/config/config.json
      # Workspace (exported copy & images)
      - ./xhs_workspace:/app/xhs_workspace
    environment:
main.py (4223 lines)
File diff suppressed because it is too large
@ -1,264 +0,0 @@
import gradio as gr
import requests
import json
import base64
import io
import os
import time
import re
import shutil
import platform
import subprocess
from PIL import Image
# ================= 0. 基础配置与工具 =================
# 强制不走代理连接本地 SD
os.environ['NO_PROXY'] = '127.0.0.1,localhost'
CONFIG_FILE = "config.json"
OUTPUT_DIR = "xhs_workspace"
os.makedirs(OUTPUT_DIR, exist_ok=True)
class ConfigManager:
@staticmethod
def load():
if os.path.exists(CONFIG_FILE):
try:
with open(CONFIG_FILE, 'r', encoding='utf-8') as f:
return json.load(f)
except:
pass
return {
"api_key": "",
"base_url": "https://api.openai.com/v1",
"sd_url": "http://127.0.0.1:7860",
"model": "gpt-3.5-turbo"
}
@staticmethod
def save(config_data):
with open(CONFIG_FILE, 'w', encoding='utf-8') as f:
json.dump(config_data, f, indent=4, ensure_ascii=False)
# ================= 1. 核心逻辑功能 =================
def get_llm_models(api_key, base_url):
if not api_key or not base_url:
return gr.update(choices=[]), "⚠️ 请先填写配置"
try:
url = f"{base_url.rstrip('/')}/models"
headers = {"Authorization": f"Bearer {api_key}"}
response = requests.get(url, headers=headers, timeout=10)
if response.status_code == 200:
data = response.json()
models = [item['id'] for item in data.get('data', [])]
# 保存配置
cfg = ConfigManager.load()
cfg['api_key'] = api_key
cfg['base_url'] = base_url
ConfigManager.save(cfg)
# 修复警告:允许自定义值
return gr.update(choices=models, value=models[0] if models else None), f"✅ 已连接,加载 {len(models)} 个模型"
return gr.update(), f"❌ 连接失败: {response.status_code}"
except Exception as e:
return gr.update(), f"❌ 错误: {e}"
def generate_copy(api_key, base_url, model, topic, style):
if not api_key: return "", "", "", "❌ 缺 API Key"
# --- 核心修改:优化了 Prompt增加字数和违禁词限制 ---
system_prompt = """
你是一个小红书爆款内容专家请根据用户主题生成内容
标题规则(严格执行)
1. 长度限制必须控制在 18 字以内含Emoji绝对不能超过 20
2. 格式要求Emoji + 爆点关键词 + 核心痛点
3. 禁忌禁止使用第一顶级等绝对化广告法违禁词
4. 风格二极管标题震惊/后悔/必看/避雷/哭了具有强烈的点击欲望
正文规则
1. 口语化多用Emoji分段清晰不堆砌长句
2. 结尾必须有 5 个以上相关话题标签(#)。
绘图 Prompt
生成对应的 Stable Diffusion 英文提示词强调masterpiece, best quality, 8k, soft lighting, ins style
返回 JSON 格式
{"title": "...", "content": "...", "sd_prompt": "..."}
"""
try:
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
payload = {
"model": model,
"messages": [
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"主题:{topic}\n风格:{style}"}
],
"response_format": {"type": "json_object"}
}
resp = requests.post(f"{base_url.rstrip('/')}/chat/completions", headers=headers, json=payload, timeout=60)
content = resp.json()['choices'][0]['message']['content']
content = re.sub(r'```json\s*|```', '', content).strip()
data = json.loads(content)
# --- 双重保险Python 强制截断 ---
title = data.get('title', '')
# 如果 LLM 不听话超过了20字强制截断并保留前19个字+省略号或者直接保留前20个
if len(title) > 20:
title = title[:20]
return title, data.get('content', ''), data.get('sd_prompt', ''), "✅ 文案生成完毕"
except Exception as e:
return "", "", "", f"❌ 生成失败: {e}"
def get_sd_models(sd_url):
try:
resp = requests.get(f"{sd_url}/sdapi/v1/sd-models", timeout=3)
if resp.status_code == 200:
models = [m['title'] for m in resp.json()]
return gr.update(choices=models, value=models[0] if models else None), "✅ SD 已连接"
return gr.update(choices=[]), "❌ SD 连接失败"
except:
return gr.update(choices=[]), "❌ SD 未启动或端口错误"
def generate_images(sd_url, prompt, neg_prompt, model, steps, cfg):
if not model: return None, "❌ 未选择模型"
# 切换模型
try:
requests.post(f"{sd_url}/sdapi/v1/options", json={"sd_model_checkpoint": model})
except:
pass # 忽略切换错误,继续尝试生成
payload = {
"prompt": prompt,
"negative_prompt": neg_prompt,
"steps": steps,
"cfg_scale": cfg,
"width": 768,
"height": 1024,
"batch_size": 2
}
try:
resp = requests.post(f"{sd_url}/sdapi/v1/txt2img", json=payload, timeout=120)
images = []
for i in resp.json()['images']:
img = Image.open(io.BytesIO(base64.b64decode(i)))
images.append(img)
return images, "✅ 图片生成完毕"
except Exception as e:
return None, f"❌ 绘图失败: {e}"
def one_click_export(title, content, images):
if not title: return "❌ 无法导出:没有标题"
safe_title = re.sub(r'[\\/*?:"<>|]', "", title)[:20]
folder_name = f"{int(time.time())}_{safe_title}"
folder_path = os.path.join(OUTPUT_DIR, folder_name)
os.makedirs(folder_path, exist_ok=True)
with open(os.path.join(folder_path, "文案.txt"), "w", encoding="utf-8") as f:
f.write(f"{title}\n\n{content}")
if images:
for idx, img in enumerate(images):
img.save(os.path.join(folder_path, f"{idx+1}.png"))
try:
if platform.system() == "Windows":
os.startfile(folder_path)
elif platform.system() == "Darwin":
subprocess.call(["open", folder_path])
else:
subprocess.call(["xdg-open", folder_path])
return f"✅ 已导出至: {folder_path}"
except:
return f"✅ 已导出: {folder_path}"
# ================= 2. UI 界面构建 =================
cfg = ConfigManager.load()
with gr.Blocks(title="小红书全自动工作台", theme=gr.themes.Soft()) as app:
gr.Markdown("## 🍒 小红书 AI 爆文生产工坊")
state_images = gr.State([])
with gr.Row():
with gr.Column(scale=1):
with gr.Accordion("⚙️ 系统设置 (自动保存)", open=True):
api_key = gr.Textbox(label="LLM API Key", value=cfg['api_key'], type="password")
base_url = gr.Textbox(label="Base URL", value=cfg['base_url'])
sd_url = gr.Textbox(label="SD URL", value=cfg['sd_url'])
with gr.Row():
btn_connect = gr.Button("🔗 连接并获取模型", size="sm")
btn_refresh_sd = gr.Button("🔄 刷新 SD", size="sm")
# 修复点 1允许自定义值防止报错
llm_model = gr.Dropdown(label="选择 LLM 模型", value=cfg['model'], allow_custom_value=True, interactive=True)
sd_model = gr.Dropdown(label="选择 SD 模型", allow_custom_value=True, interactive=True)
status_bar = gr.Markdown("等待就绪...")
gr.Markdown("### 💡 内容构思")
topic = gr.Textbox(label="笔记主题", placeholder="例如:优衣库早春穿搭")
style = gr.Dropdown(["好物种草", "干货教程", "情绪共鸣", "生活Vlog"], label="风格", value="好物种草")
btn_step1 = gr.Button("✨ 第一步:生成文案方案", variant="primary")
with gr.Column(scale=1):
gr.Markdown("### 📝 文案确认")
# 修复点 2去掉了 show_copy_button 参数,兼容旧版 Gradio
res_title = gr.Textbox(label="标题 (AI生成)", interactive=True)
res_content = gr.TextArea(label="正文 (AI生成)", lines=10, interactive=True)
res_prompt = gr.TextArea(label="绘图提示词", lines=4, interactive=True)
with gr.Accordion("🎨 绘图参数", open=False):
neg_prompt = gr.Textbox(label="反向词", value="nsfw, lowres, bad anatomy, text, error")
steps = gr.Slider(15, 50, value=25, label="步数")
cfg_scale = gr.Slider(1, 15, value=7, label="相关性 (CFG)")
btn_step2 = gr.Button("🎨 第二步:开始绘图", variant="primary")
with gr.Column(scale=1):
gr.Markdown("### 🖼️ 视觉结果")
gallery = gr.Gallery(label="生成预览", columns=1, height="auto")
btn_export = gr.Button("📂 一键导出 (文案+图片)", variant="stop")
export_msg = gr.Markdown("")
# ================= 3. 事件绑定 =================
btn_connect.click(fn=get_llm_models, inputs=[api_key, base_url], outputs=[llm_model, status_bar])
btn_refresh_sd.click(fn=get_sd_models, inputs=[sd_url], outputs=[sd_model, status_bar])
btn_step1.click(
fn=generate_copy,
inputs=[api_key, base_url, llm_model, topic, style],
outputs=[res_title, res_content, res_prompt, status_bar]
)
def on_img_gen(sd_url, p, np, m, s, c):
imgs, msg = generate_images(sd_url, p, np, m, s, c)
return imgs, imgs, msg
btn_step2.click(
fn=on_img_gen,
inputs=[sd_url, res_prompt, neg_prompt, sd_model, steps, cfg_scale],
outputs=[gallery, state_images, status_bar]
)
btn_export.click(
fn=one_click_export,
inputs=[res_title, res_content, state_images],
outputs=[export_msg]
)
app.load(fn=get_sd_models, inputs=[sd_url], outputs=[sd_model, status_bar])
if __name__ == "__main__":
app.launch(inbrowser=True)
Binary file not shown (image, 5.4 MiB, removed)
Binary file not shown (image, 99 KiB, removed)
@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-24
@ -0,0 +1,85 @@
## Context
当前项目是一个单机运行的小红书 AI 爆文自动化工具,使用 Gradio 作为 UI 框架SQLite 为发布队列持久化JSON 文件存储配置与分析数据。随着功能迭代至 V2.0,主文件 `main.py` 已超过 4400 行并积累了多处生产风险API Key 以明文写入 `config.json`、全局列表变量被多个回调线程无锁读写、`_temp_publish/` 目录的临时图片在发布后未清理、JSON 文件采用直接覆盖写入(电量耗尽或进程中断时会写出空文件)、发布前无标题/正文/图片数量的校验。
## Goals / Non-Goals
**Goals:**
- 消除 6 项已知的生产风险,使项目能安全、稳定地长期自动运营
- 保持所有 Gradio 回调函数签名不变(不破坏现有 UI 绑定)
- 每项改动独立可部署,不相互耦合
- 为后续功能拓展提供更清晰的代码结构基础
**Non-Goals:**
- 重写为 Web 服务或引入数据库 ORM
- 改变现有 UI 交互逻辑或视觉设计
- 实现性能优化或多账号支持
- 修改 MCP / SD / LLM 服务接口
## Decisions
### 决策 1敏感配置存储方案选 keyring + 环境变量覆盖,放弃纯加密文件
**选项 A选定**: 使用 `keyring` 库将 API Key 存入系统凭证管理器Windows Credential Manager / macOS Keychain / Linux Secret Service`config.json` 中仅保留占位符 `"[keyring]"`;同时支持 `AUTOBOT_API_KEY_<NAME>` 环境变量在无 GUI 场景Docker / CI下覆盖。
**选项 B放弃**: 自行用 `Fernet` 加密后写入 `config.json`。缺点:密钥仍需本地存储,安全提升有限,且增加密钥管理复杂度。
**选项 C放弃**: 要求用户全部改用环境变量。对当前 Gradio UI 用户体验极差,用户每次重启需重新设置。
**结论**`keyring` 方案对 Windows 单机用户最为透明,且 Docker 场景通过环境变量无缝降级。新增 `ConfigManager.get_secure(key)` / `set_secure(key, value)` 接口,内部优先读取环境变量,其次读 keyring最后回退旧版明文自动迁移一次
---
### 决策 2全局缓存改用模块级 `threading.RLock` 保护,不引入新类
**选项 A选定**: 在 `main.py` 模块顶层声明一个 `_cache_lock = threading.RLock()`,所有读写 `_cached_proactive_entries` / `_cached_my_note_entries` 的函数用 `with _cache_lock:` 包裹。
**选项 B放弃**: 封装为 `CacheManager` 类。当前代码耦合 Gradio UI 较深,引入类会导致较大重构,收益不成比例。
**结论**:最小侵入方案,改动 5 处函数约 20 行,可在一个 PR 内完成。
---
### 决策 3JSON 原子写使用 tempfile + os.replace
Python 标准库 `tempfile.NamedTemporaryFile` + `os.replace`(同目录)在 POSIX 和 Windows 均为原子操作Windows Vista+ 支持)。无需引入新依赖。
适用范围:`ConfigManager.save()``AnalyticsService._save_analytics()``AnalyticsService._save_weights()`
---
### 决策 4临时文件清理在发布回调内同步执行
`publish_to_xhs()` 函数的 try/finally 块中清理 `_temp_publish/` 下以本次调用 `ai_N.jpg` 命名的文件。不使用全目录清空以免并发发布时误删其他会话文件Gradio 多用户虽少见,但更安全)。逐文件删除,删除失败仅打印 warning不阻断流程。
---
### 决策 5发布校验集中在 `publish_to_xhs()` 一处,不分散到 UI
校验逻辑写在业务函数中,而非 Gradio 的 `gr.Textbox` 校验器,保证逻辑可测试,且 UI 重构时不会遗失。
---
### 决策 6UI 拆分采用渐进式迁移,不一次性重写
`ui/` 目录为目标,先提取最大的 Tab内容创作 Tab`ui/tab_create.py`,返回 `(components, callbacks)` 元组,`main.py` 调用并注册。其余 Tab 在后续迭代中逐步迁移。本次变更只迁移 `ui/tab_create.py` 作为示范,不强制完成全部拆分。
## Risks / Trade-offs
| 风险 | 缓解措施 |
|------|---------|
| `keyring` 在部分 Linux headless 环境无后端可用 | 检测到 `keyring.errors.NoKeyringError` 时降级为明文(打印警告),行为与改造前一致 |
| `os.replace` 在跨卷(不同磁盘分区)时失败 | 使用 `tempfile.mkstemp(dir=same_dir)` 确保临时文件与目标同卷 |
| 并发发布时 `ai_N.jpg` 命名冲突 | 当前 Gradio 为单进程单用户;若未来支持多用户,改用 UUID 命名 |
| UI 拆分期间双重维护 | 每次只迁移一个 Tab迁移完成前旧代码仍有效 |
## Migration Plan
1. **无需数据迁移**`config.json` 的旧明文 Key 在首次调用 `get_secure()` 时自动读取并写入 keyring同时将 `config.json` 中对应字段替换为 `"[keyring]"` 占位符,单次完成。
2. **依赖安装**`pip install keyring` 并更新 `requirements.txt`
3. **无回滚风险**:各项改动均向后兼容,若 keyring 不可用则自动回退明文模式。
## Open Questions
- 是否需要在 UI 上增加「导出加密备份配置」功能,方便用户迁移设备?(本次范围外,记录待后续评估)
- `ui/` 拆分后是否需要引入 pytest-gradio 进行 UI 层单元测试?(本次不实施)
@ -0,0 +1,39 @@
## Why
当前项目已具备完整的核心功能(文案生成、图片绘制、发布队列、评论管家、数据分析),但在安全性、健壮性、可维护性上存在若干生产级隐患:配置文件明文保存 API Key、全局可变状态缺乏线程锁、临时文件无清理机制、单文件超 4400 行难以维护。这些问题会在长期运营中导致数据泄露、并发竞态或内存增长,亟需在持续迭代前统一修复。
## What Changes
- **安全加固**`config.json` 中的 API Key 等敏感字段改用操作系统 keyring / 环境变量存储,文件中只保留非敏感配置
- **线程安全**:全局笔记缓存 (`_cached_proactive_entries` / `_cached_my_note_entries`) 及 `ConfigManager` 写操作加 `threading.Lock`
- **临时文件清理**:发布完成或失败后自动清理 `_temp_publish/` 目录下的临时图片
- **原子写文件**`analytics_service.py``config_manager.py` 的 JSON 持久化改为「写临时文件 → rename」方式防止写中断导致数据损坏
- **发布前输入校验**标题长度≤20字、正文长度、图片数量1-18张在提交发布前统一校验并给出明确提示
- **代码拆分**:将 `main.py` 的 Gradio UI 构建与业务逻辑分离,拆分为 `ui/` 目录下的多个 tab 模块,主文件只负责组装
## Capabilities
### New Capabilities
- `secure-config`:安全配置管理——敏感字段加密/外置存储,支持环境变量覆盖
- `thread-safe-cache`:线程安全的笔记列表缓存管理器,替换全局裸列表
- `temp-file-lifecycle`:临时发布文件的自动生命周期管理(创建→使用→清理)
- `atomic-persistence`JSON 持久化原子写操作,防止文件损坏
- `publish-input-validation`:发布前内容合规校验(长度/图片数/必填项)
- `ui-module-split`:将 `main.py` UI 构建逻辑拆分为独立 tab 模块
### Modified Capabilities
(无现有 spec首次建立规范
## Impact
- **`config_manager.py`**`save()` 方法改为原子写;新增 `get_secure()` / `set_secure()` 接口
- **`analytics_service.py`**`_save_analytics()` / `_save_weights()` 改为原子写
- **`publish_queue.py`**:无需修改(已使用 SQLite WAL自身较健壮
- **`main.py`**
- 全局缓存变量引入 `threading.Lock`
- `publish_to_xhs()` 增加校验逻辑与 temp 清理
- UI 构建代码逐步迁移至 `ui/tab_*.py`
- **`requirements.txt`**:可能新增 `keyring` 依赖
- **无破坏性 API 变更**:所有 Gradio 回调签名保持不变
@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: JSON 文件写入使用原子操作
`ConfigManager.save()``AnalyticsService._save_analytics()``AnalyticsService._save_weights()` SHALL 使用「写临时文件 → `os.replace()` 原子重命名」的方式持久化数据。临时文件 SHALL 创建于与目标文件相同的目录(同卷),以确保 `os.replace()` 的原子性。
#### Scenario: 写入过程中进程中断不产生损坏文件
- **WHEN** JSON 写入过程中进程被强制终止
- **THEN** 目标文件保持写入前的完整状态,不出现空文件或半写入的 JSON
#### Scenario: 正常写入成功替换目标文件
- **WHEN** `ConfigManager.save()` 被调用且数据合法
- **THEN** 目标 `config.json` 被更新为最新内容,写入前存在的临时文件已被清理
#### Scenario: 临时文件与目标文件在同一目录
- **WHEN** 调用任意原子写函数
- **THEN** 临时文件的父目录与目标文件的父目录相同(通过 `tempfile.mkstemp(dir=<target_dir>)` 实现)
@ -0,0 +1,30 @@
## ADDED Requirements
### Requirement: 发布前校验标题、正文和图片
`publish_to_xhs()` 函数 SHALL 在调用 MCP 发布接口前执行以下校验,任何校验失败 SHALL 立即返回包含明确说明的错误消息字符串,不发起网络请求:
| 字段 | 规则 |
|------|------|
| 标题 | 非空,长度 ≤ 20 个字符(中英文均按 1 字符计) |
| 图片数量 | 至少 1 张,至多 18 张 |
| 图片文件 | 每个路径对应的文件在磁盘上真实存在 |
#### Scenario: 标题超长时返回明确错误
- **WHEN** `publish_to_xhs()` 被调用且标题字符数超过 20
- **THEN** 返回包含「标题超长」提示及当前字符数的错误字符串,不调用 MCP 接口
#### Scenario: 无图片时返回明确错误
- **WHEN** `publish_to_xhs()` 被调用且最终收集到的图片路径列表为空
- **THEN** 返回「至少需要 1 张图片」的错误字符串
#### Scenario: 图片数量超限时返回明确错误
- **WHEN** 最终图片路径列表超过 18 张
- **THEN** 返回包含当前图片数和限制数的错误字符串,不发起发布请求
#### Scenario: 图片文件不存在时返回明确错误
- **WHEN** 图片路径列表中有路径对应的文件不存在于磁盘
- **THEN** 返回包含该文件路径的「文件不存在」错误字符串
#### Scenario: 校验通过后正常发布
- **WHEN** 所有字段均通过校验
- **THEN** 正常调用 MCP 接口发布,行为与改造前一致
@ -0,0 +1,20 @@
## ADDED Requirements
### Requirement: 敏感字段通过系统 keyring 或环境变量存储
ConfigManager SHALL 提供 `get_secure(key: str) -> str``set_secure(key: str, value: str)` 接口用于读写需要保护的配置项API Key 等)。读取优先级:环境变量 `AUTOBOT_<KEY>` > 系统 keyring > `config.json` 明文(兼容旧版,读取后自动迁移)。`config.json` 中已迁移的字段值替换为占位符字符串 `"[keyring]"`
#### Scenario: 首次读取明文 API Key 时自动迁移
- **WHEN** 调用 `get_secure("api_key")``config.json` 中该字段为普通字符串(非占位符)
- **THEN** 系统将该值写入 keyring`config.json` 中该字段更新为 `"[keyring]"`,并返回原始值
#### Scenario: keyring 不可用时降级为明文
- **WHEN** 系统 keyring 后端不可用(抛出 `NoKeyringError`)且无对应环境变量
- **THEN** `get_secure()` 直接读取 `config.json` 中的明文值,并打印 WARNING 日志,不抛出异常
#### Scenario: 环境变量优先于 keyring
- **WHEN** 环境变量 `AUTOBOT_API_KEY` 已设置且 keyring 中也有相同 key 的值
- **THEN** `get_secure("api_key")` 返回环境变量的值
#### Scenario: 通过 UI 设置新的 API Key
- **WHEN** 用户在 Gradio UI 中输入新的 API Key 并保存
- **THEN** 调用 `set_secure("api_key", value)` 将值存入 keyring或在降级模式下写入 `config.json`UI 不显示原始值
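The lookup priority (env var > keyring > plaintext) can be sketched roughly as below. This is an assumption-laden sketch: the service name `_SERVICE`, the fallback message, and the omission of the auto-migration step are all illustrative, not the project's actual `ConfigManager` code.

```python
import os

try:
    import keyring
    from keyring.errors import NoKeyringError
except ImportError:  # keyring not installed: treat like a missing backend
    keyring = None
    class NoKeyringError(Exception):
        pass

_SERVICE = "xhs-autobot"  # hypothetical keyring service name

def get_secure(key: str, config: dict) -> str:
    """Priority: AUTOBOT_<KEY> env var > system keyring > config plaintext.

    The spec's one-time migration of plaintext values into keyring is
    omitted here for brevity.
    """
    env_val = os.environ.get(f"AUTOBOT_{key.upper()}")
    if env_val:
        return env_val
    if keyring is not None:
        try:
            stored = keyring.get_password(_SERVICE, key)
            if stored:
                return stored
        except NoKeyringError:
            print(f"WARNING: no keyring backend available, plaintext fallback for {key}")
    value = config.get(key, "")
    return "" if value == "[keyring]" else value
```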
@ -0,0 +1,23 @@
## ADDED Requirements
### Requirement: 发布完成后清理本次生成的临时图片文件
`publish_to_xhs()` 函数 SHALL 在发布流程(无论成功或失败)结束后,删除本次调用写入 `_temp_publish/` 目录的 AI 生成临时图片文件。删除失败 SHALL 仅记录 WARNING 日志,不影响返回结果。
#### Scenario: 发布成功后临时文件被清理
- **WHEN** `publish_to_xhs()` 发布成功并返回成功消息
- **THEN** 本次写入的所有 `ai_N.jpg` 临时文件已从磁盘删除
#### Scenario: 发布失败后临时文件同样被清理
- **WHEN** `publish_to_xhs()` 因网络错误等原因抛出异常或返回失败消息
- **THEN** 本次写入的所有 `ai_N.jpg` 临时文件已从磁盘删除
#### Scenario: 清理失败不阻断主流程
- **WHEN** 临时文件删除时抛出 `OSError`(如文件已被其他进程占用)
- **THEN** 系统记录 WARNING 日志并继续,`publish_to_xhs()` 的返回值不受影响
### Requirement: 不清理其他会话的临时文件
发布清理逻辑 SHALL 只删除本次调用写入的文件(通过追踪写入路径列表),不执行 `_temp_publish/` 目录的全量清空。
#### Scenario: 并发发布场景下不误删其他文件
- **WHEN** 两次发布准备流程同时写入 `_temp_publish/` 目录
- **THEN** 每次清理只删除自己写入的文件,不影响另一次的文件
@ -0,0 +1,19 @@
## ADDED Requirements
### Requirement: 笔记列表缓存读写受互斥锁保护
模块级全局变量 `_cached_proactive_entries``_cached_my_note_entries` 的所有读写操作 SHALL 在 `threading.RLock` 的保护下执行,以防止 Gradio 回调并发调用时产生数据竞态。
#### Scenario: 并发刷新时缓存更新不出现竞态
- **WHEN** 两个 Gradio 回调线程同时调用 `_fetch_and_cache()`
- **THEN** 最终缓存状态为其中一次完整写入的结果,不出现部分更新或列表长度异常
#### Scenario: 读取缓存时不被并发写入中断
- **WHEN** `_pick_from_cache()` 正在迭代缓存列表时,另一线程触发缓存更新
- **THEN** 迭代过程不抛出 `RuntimeError: list changed size during iteration`
### Requirement: 缓存操作封装为受保护的工具函数
模块 SHALL 提供 `_set_cache(name, entries)``_get_cache(name)` 两个内部函数,统一管理缓存读写,不在业务函数中直接赋值全局列表。
#### Scenario: 缓存写入通过统一接口
- **WHEN** 任意函数需要更新笔记缓存
- **THEN** 必须调用 `_set_cache(name, entries)` 而非直接赋值 `_cached_*` 变量
@ -0,0 +1,19 @@
## ADDED Requirements
### Requirement: 内容创作 Tab 的 UI 代码迁移至独立模块
`ui/tab_create.py` SHALL 包含原 `main.py` 中「内容创作 Tab」的全部 Gradio 组件定义和事件绑定,并导出 `build_tab() -> None` 函数,该函数接受一个 `gr.Blocks` 上下文,在其中构建 Tab 内容。`main.py` SHALL 通过 `from ui.tab_create import build_tab` 调用该函数,不在主文件中保留重复的组件代码。
#### Scenario: main.py 正常启动并显示内容创作 Tab
- **WHEN** 运行 `python main.py` 启动 Gradio 应用
- **THEN** 内容创作 Tab 正常显示,所有组件与迁移前功能一致
#### Scenario: tab_create 模块可独立导入
- **WHEN** 在 Python 中执行 `from ui.tab_create import build_tab`
- **THEN** 不抛出任何导入错误,`build_tab` 为可调用对象
### Requirement: ui/ 目录结构规范
`ui/` 目录 SHALL 包含 `__init__.py`,每个 Tab 模块文件命名约定为 `tab_<name>.py`,不在 Tab 模块中直接调用全局服务初始化代码(如 `ConfigManager()``LLMService()` 等单例初始化应由 `main.py` 完成并通过参数或模块级引用传入)。
#### Scenario: 新增 Tab 模块的标准结构
- **WHEN** 开发者创建新的 `ui/tab_*.py` 文件
- **THEN** 该文件导出 `build_tab(...)` 函数,且顶层不包含副作用代码(不在 import 时触发服务连接)
@ -0,0 +1,55 @@
## 1. 依赖与环境准备
- [x] 1.1 在 `requirements.txt` 中添加 `keyring>=24.0.0`
- [x] 1.2 运行 `pip install keyring` 并验证在当前系统Windows可正常使用 `keyring.get_password` / `keyring.set_password`
## 2. 安全配置secure-config
- [x] 2.1 在 `config_manager.py` 中新增 `get_secure(key: str) -> str` 方法:优先读取环境变量 `AUTOBOT_<KEY.upper()>`,其次读取系统 keyring最后回退 `config.json` 明文(自动迁移一次),捕获 `keyring.errors.NoKeyringError` 并降级
- [x] 2.2 在 `config_manager.py` 中新增 `set_secure(key: str, value: str)` 方法:写入系统 keyring降级模式下写入 `config.json`),并将 `config.json` 对应字段更新为占位符 `"[keyring]"`
- [x] 2.3 将 `main.py` 中 LLM 提供商的 `api_key` 读写全部替换为 `cfg.get_secure()` / `cfg.set_secure()`
- [ ] 2.4 手动测试:重启应用后 `config.json``api_key` 已变为 `"[keyring]"`LLM 连接功能正常
## 3. JSON 原子写atomic-persistence
- [x] 3.1 在 `config_manager.py``save()` 方法中将直接 `open(CONFIG_FILE, "w")` 改为 `tempfile.mkstemp(dir=<same_dir>)` + 写入 + `os.replace()` 原子重命名
- [x] 3.2 在 `analytics_service.py``_save_analytics()` 方法中同样改为原子写
- [x] 3.3 在 `analytics_service.py``_save_weights()` 方法中同样改为原子写
- [ ] 3.4 测试:在写入过程中(`time.sleep` 模拟)验证目标文件仍完整,临时文件被清理
## 4. 线程安全缓存thread-safe-cache
- [x] 4.1 在 `main.py` 顶部声明 `_cache_lock = threading.RLock()`
- [x] 4.2 新增内部函数 `_set_cache(name: str, entries: list)``_get_cache(name: str) -> list`,内部使用 `with _cache_lock:` 保护
- [x] 4.3 将 `_fetch_and_cache()` 中对 `_cached_proactive_entries` / `_cached_my_note_entries` 的直接赋值改为调用 `_set_cache()`
- [x] 4.4 将 `_pick_from_cache()` 中读取缓存改为调用 `_get_cache()`(在锁内完成列表快照拷贝)
- [x] 4.5 将 `fetch_my_notes()` 中对 `_cached_my_note_entries` 的直接赋值改为调用 `_set_cache()`
## 5. 发布前输入校验publish-input-validation
- [x] 5.1 在 `publish_to_xhs()` 函数内、MCP 调用前添加标题长度校验(`len(title) > 20` 返回错误)
- [x] 5.2 添加图片数量下限校验(`len(image_paths) == 0` 返回「至少需要 1 张图片」)
- [x] 5.3 添加图片数量上限校验(`len(image_paths) > 18` 返回含实际数量的错误消息)
- [x] 5.4 添加图片文件存在性校验(遍历 `image_paths`,发现不存在的文件时返回含路径的错误)
- [x] 5.5 在 Gradio UI 的发布按钮标题输入框旁添加字符计数提示(`gr.Textbox``info` 参数)
## 6. 临时文件生命周期temp-file-lifecycle
- [x] 6.1 在 `publish_to_xhs()` 中记录本次写入的 AI 临时图片路径到局部变量 `ai_temp_files = []`
- [x] 6.2 在函数末尾添加 `finally:` 块,遍历 `ai_temp_files` 逐一调用 `os.remove()`,捕获 `OSError` 仅记录 `logger.warning`
- [ ] 6.3 验证:发布成功后 `_temp_publish/` 目录中的 `ai_*.jpg` 文件已被删除;发布失败后同样被清理
## 7. UI 模块拆分ui-module-split
- [x] 7.1 创建 `ui/` 目录,添加 `ui/__init__.py`(空文件)
- [x] 7.2 创建 `ui/tab_create.py`,将 `main.py` 中「内容创作 Tab」的所有 Gradio 组件定义和 `.click()` / `.change()` 事件绑定代码迁移至该文件,导出 `build_tab(cfg, mcp_url_box, ...)` 函数
- [x] 7.3 在 `main.py` 中用 `from ui.tab_create import build_tab` + 调用替换原有内容创作 Tab 代码
- [ ] 7.4 启动应用,验证内容创作 Tab 功能与迁移前完全一致(文案生成、图片生成、发布按钮均正常)
## 8. 集成验证
- [ ] 8.1 启动应用依次测试LLM 连接 → 文案生成 → 图片生成 → 发布(包含校验不通过和校验通过两个场景)
- [ ] 8.2 检查 `config.json` 中无明文 API Key
- [ ] 8.3 检查 `_temp_publish/` 目录在发布后为空(或只含本次以外的文件)
- [ ] 8.4 检查 `autobot.log` 中无 ERROR 级别日志
@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-24
@ -0,0 +1,119 @@
## Context
`main.py` 是整个项目的单一入口,目前 4359 行,包含 10+ 个业务域的业务逻辑、全局状态、UI 组件和事件绑定。上一轮重构已将 Tab 1内容创作提取为 `ui/tab_create.py`,建立了 Tab 模块化的模式。本次设计延续该模式,将业务逻辑提取为 `services/` 目录,并完成剩余 Tab 的 UI 拆分。
已有外部服务层:`config_manager.py``llm_service.py``sd_service.py``mcp_client.py``analytics_service.py``publish_queue.py`
`main.py` 中剩余的函数是对这些服务的**编排层**,应该独立成模块而非继续膨胀在入口文件里。
约束:
- Python 3.10+(当前环境)
- 不引入新的外部依赖
- 重构不改变任何业务行为
## Goals / Non-Goals
**Goals:**
- 将 `main.py` 瘦身至 ~300 行纯入口层(导入 + UI 组装 + `app.launch()`
- 建立清晰的分层:`services/`(业务编排)→ `ui/`Gradio 组件)→ `main.py`(入口)
- 消除模块间隐式依赖,所有依赖通过函数参数显式传递
- 保持所有函数签名、Gradio 回调绑定不变
**Non-Goals:**
- 不重构或重写任何现有业务逻辑(本次是纯搬迁)
- 不改变 `config_manager.py``llm_service.py` 等已有服务层
- 不引入类/面向对象重构(保持现有函数式风格)
- 不添加单元测试(独立变更)
## Decisions
### D1分层架构 —— `services/` 不依赖 `ui/``ui/` 不依赖 `services/`
```
main.py (入口层:组装 + 启动)
├── services/ (业务编排层:纯 Python无 Gradio)
│ ├── connection.py
│ ├── content.py
│ ├── hotspot.py
│ ├── engagement.py
│ ├── rate_limiter.py
│ ├── profile.py
│ ├── persona.py
│ ├── scheduler.py
│ ├── queue_ops.py
│ └── autostart.py
├── ui/ (UI 层Gradio 组件 + 事件绑定)
│ ├── tab_create.py ← 已存在
│ ├── tab_hotspot.py
│ ├── tab_engage.py
│ ├── tab_profile.py
│ ├── tab_auto.py
│ ├── tab_queue.py
│ ├── tab_analytics.py
│ └── tab_settings.py
```
**为何不让 `ui/` 依赖 `services/`** 每个 `build_tab()` 接收回调函数作为参数(已有 `tab_create.py` 的模式),而非直接 import service。这样 UI 层完全解耦,可独立测试或替换。
### D2共享单例通过 `main.py` 初始化,作为参数传入
`cfg``mcp``analytics``pub_queue``queue_publisher` 仍在 `main.py` 顶层初始化。Service 函数需要它们时通过**函数参数**接收,不在 service 模块顶层 import。
**为何不在各 service 模块初始化单例:** 防止循环依赖、防止多次初始化、保持测试时可替换。
### D3有状态模块使用模块级变量不封装成类
`rate_limiter``scheduler``engagement` 内部有 `threading.Event``_daily_stats` 等状态。保持现有模块级变量风格(不改成类),仅将变量和函数整体搬迁到对应模块。
**为何不改成类:** 本次目标是结构拆分,不是重构设计模式,避免引入额外变更风险。
### D4迁移策略 —— 先提取后删除,不做重定向
每个域的迁移步骤:
1. 在 service 模块中写入函数(复制粘贴 + 调整 import
2. 在 `main.py` 中删除对应函数,改为 `from services.xxx import ...`
3. 运行 `ast.parse()` 验证语法
4. 运行应用验证启动不报错
**为何不做 `main.py` 中的临时 re-export** 简单场景直接删+导入更清晰,且 Gradio 回调绑定在 `main.py` 中通过变量名引用,只需保证同名变量在作用域内即可。
### D5UI Tab 模块统一使用 `build_tab(fn_*, ...)` 签名
复用 `tab_create.py` 已建立的模式:
- 每个 `build_tab()` 接收所需的回调函数和共享 Gradio 组件作为参数
- 函数内部创建本 Tab 的所有 Gradio 组件及事件绑定
- 返回 `dict`,包含需要被其他 Tab 或 `app.load()` 引用的组件
### D6`services/``ui/` 均需 `__init__.py`
使用空文件标记为 Python 包,与 `ui/__init__.py` 已有做法一致。
## Risks / Trade-offs
- **[风险] 大量 import 调整可能遗漏** → 每个模块完成后执行 `ast.parse()` + 应用启动验证,逐域推进
- **[风险] 全局状态的隐式共享** → 调度器、限流器的模块级变量在模块首次 import 时初始化Python 模块单例语义保证只初始化一次,行为与当前一致
- **[权衡] `build_tab()` 参数列表长** → 与现有 `tab_create.py` 的做法一致,接受这种显式依赖的冗长性,可在后续变更中引入 dataclass 参数包
## Migration Plan
按以下顺序逐域提取,每步验证后再继续:
1. `services/rate_limiter.py` —— 无外部依赖,最安全的起点
2. `services/autostart.py` —— 独立,平台相关逻辑隔离
3. `services/persona.py` —— 仅依赖 `cfg`
4. `services/connection.py` —— 依赖 `cfg``llm_service``sd_service``mcp_client`
5. `services/profile.py` —— 依赖 `mcp_client`
6. `services/hotspot.py` —— 依赖 `llm_service``mcp_client`
7. `services/content.py` —— 依赖多个服务,最复杂
8. `services/engagement.py` —— 依赖 `rate_limiter``mcp_client`
9. `services/scheduler.py` —— 依赖 `engagement``content`
10. `services/queue_ops.py` —— 依赖 `content``pub_queue`
11. `ui/tab_hotspot.py` ~ `ui/tab_settings.py` —— 7 个 Tab UI 拆分
**回滚策略:** 所有修改通过 git 追踪;每个 service 提取为一个独立 commit任意步骤可 `git revert`
## Open Questions
- `_auto_log` 列表(被 `engagement``scheduler` 共同写入)归属哪个模块?
→ 暂定置于 `services/scheduler.py``engagement` 接收 `log_fn` 回调参数
- `queue_publisher` 的 callback 注册(`set_publish_callback`)在哪里调用?
→ 保留在 `main.py` 初始化段callback 函数迁移到 `services/queue_ops.py`

## Why
`main.py` 目前共 4359 行将连接管理、内容生成、自动化运营、调度、队列、UI 等 10+ 个业务域全部混入单一文件,导致阅读困难、修改风险高、模块间依赖不清晰。随着功能继续增长,维护成本将持续上升。现在是在文件进一步膨胀前完成结构化拆分的最佳时机。
## What Changes
- 按业务域将 `main.py` 中的函数提取为独立的 `services/` 模块
- 将剩余 UI Tab 提取为独立的 `ui/tab_*.py` 模块(`tab_create.py` 已完成,需继续完成其余 Tab
- `main.py` 保留为**入口层**:仅负责组装 Gradio UI、注册事件、启动应用
- 所有模块保持向后兼容,不改变对外行为
## Capabilities
### New Capabilities
- `services-connection`: LLM / SD / MCP 连接管理(`connect_llm``connect_sd``check_mcp_status`、登录相关)
- `services-content`: 内容生成(`generate_copy``generate_images``publish_to_xhs``one_click_export`、face image 上传)
- `services-hotspot`: 热点探测(`search_hotspots``analyze_and_suggest``generate_from_hotspot`
- `services-engagement`: 互动自动化(`auto_comment_once``auto_like_once``auto_favorite_once``auto_reply_once` 及对应 `_with_log` 包装)
- `services-rate-limiter`: 频率控制与每日限额(`_reset_daily_stats_if_needed``_check_daily_limit``_is_in_cooldown` 等)
- `services-profile`: 用户主页解析(`fetch_my_profile``_parse_profile_json``_parse_count`
- `services-persona`: 人设管理(`_match_persona_pools``get_persona_topics``get_persona_keywords``on_persona_changed`
- `services-scheduler`: 自动调度器(`_scheduler_loop``start_scheduler``stop_scheduler``get_scheduler_status`
- `services-queue`: 内容排期队列(`generate_to_queue``queue_*` 系列函数、`_queue_publish_callback`
- `services-autostart`: 开机自启管理(`enable_autostart``disable_autostart``toggle_autostart` 等)
- `ui-tabs-split`: 将其余 Gradio Tab热点、互动、我的主页、自动运营、队列、数据分析、设置提取为 `ui/tab_*.py`
### Modified Capabilities
(无需求层面变更,仅为实现重构)
## Impact
- **主要受影响文件**`main.py`(从 4359 行缩减至 ~300 行入口层)
- **新增目录**`services/`10 个模块)、`ui/`8 个 Tab 模块,`tab_create.py` 已存在)
- **依赖关系**`services/` 模块之间通过函数参数传递依赖,避免循环导入;`main.py` 统一导入并组装
- **无 API 变更**所有函数签名保持不变Gradio 回调绑定不受影响
- **运行时影响**:零,重构不改变业务逻辑
@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: 开机自启管理函数迁移至独立模块
系统 SHALL 将开机自启相关常量和函数从 `main.py` 提取至 `services/autostart.py`,包括:`_APP_NAME``_STARTUP_REG_KEY``_get_startup_script_path``_get_startup_bat_path``_create_startup_scripts``is_autostart_enabled``enable_autostart``disable_autostart``toggle_autostart`
#### Scenario: 模块导入成功
- **WHEN** `main.py` 执行 `from services.autostart import toggle_autostart, is_autostart_enabled`
- **THEN** 函数可正常调用
#### Scenario: Windows 注册表操作行为不变
- **WHEN** `enable_autostart()` 在 Windows 系统上被调用
- **THEN** SHALL 向注册表 `HKCU\Software\Microsoft\Windows\CurrentVersion\Run` 写入启动项,行为与迁移前完全一致
#### Scenario: 非 Windows 平台处理不变
- **WHEN** `enable_autostart()` 在非 Windows 系统上被调用
- **THEN** SHALL 返回与迁移前相同的平台不支持提示信息
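A rough sketch of the HKCU Run-key write the scenarios describe (illustrative only: the value name `xhs-autobot` and both return messages are assumptions, not the project's actual `enable_autostart()` strings):

```python
import platform

def enable_autostart(script_path: str) -> str:
    """Write an HKCU\\...\\Run entry on Windows; return a notice elsewhere."""
    if platform.system() != "Windows":
        return "❌ 仅支持 Windows 平台"  # assumed wording of the unsupported-platform notice
    import winreg  # stdlib, Windows only
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Run",
        0,
        winreg.KEY_SET_VALUE,
    )
    try:
        winreg.SetValueEx(key, "xhs-autobot", 0, winreg.REG_SZ, script_path)
    finally:
        winreg.CloseKey(key)
    return f"✅ 已写入开机自启:{script_path}"
```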
@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move connection management into a dedicated module
The system SHALL extract all LLM / SD / MCP connection-management and authentication functions from `main.py` into `services/connection.py`, including: `_get_llm_config`, `connect_llm`, `add_llm_provider`, `remove_llm_provider`, `on_provider_selected`, `connect_sd`, `on_sd_model_change`, `check_mcp_status`, `get_login_qrcode`, `logout_xhs`, `_auto_fetch_xsec_token`, `check_login`, `save_my_user_id`, `upload_face_image`, `load_saved_face_image`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.connection import connect_llm, connect_sd`
- **THEN** all functions are callable and behave exactly as before the migration
#### Scenario: External dependencies are passed in as parameters
- **WHEN** functions in `services/connection.py` need access to `cfg`, `llm` (`LLMService`), `sd` (`SDService`), or `mcp` (`MCPClient`)
- **THEN** these dependencies SHALL be received as function parameters; the top level of `services/connection.py` creates no singleton instances
#### Scenario: No circular imports
- **WHEN** the Python interpreter loads `services/connection.py`
- **THEN** no `ImportError` or circular-import error occurs

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move content generation into a dedicated module
The system SHALL extract the content-generation, image-generation, publishing, and export functions from `main.py` into `services/content.py`, including: `generate_copy`, `generate_images`, `one_click_export`, `publish_to_xhs`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes `from services.content import generate_copy, generate_images, publish_to_xhs, one_click_export`
- **THEN** all functions are callable and behave exactly as before the migration
#### Scenario: Existing validation logic is preserved
- **WHEN** `publish_to_xhs` is called with a title longer than 20 characters or an invalid number of images
- **THEN** the function SHALL return the same error messages as before the migration, with no change in validation behavior
#### Scenario: Temp-file cleanup logic is preserved
- **WHEN** `publish_to_xhs` finishes (whether it succeeds or fails)
- **THEN** the AI temp-file cleanup in its `finally` block SHALL still run

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move engagement automation into a dedicated module
The system SHALL extract the comment, like, favorite, and reply automation functions from `main.py` into `services/engagement.py`, including: `load_note_for_comment`, `ai_generate_comment`, `send_comment`, `fetch_my_notes`, `on_my_note_selected`, `fetch_my_note_comments`, `ai_reply_comment`, `send_reply`, `auto_comment_once`, `_auto_comment_with_log`, `auto_like_once`, `_auto_like_with_log`, `auto_favorite_once`, `_auto_favorite_with_log`, `auto_reply_once`, `_auto_reply_with_log`, `_auto_publish_with_log`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.engagement import auto_comment_once, auto_like_once`
- **THEN** all functions are callable as before
#### Scenario: Log callback is parameterized
- **WHEN** a `_with_log` function in `engagement.py` needs to append a log entry
- **THEN** the function SHALL receive a `log_fn` parameter (a callable) for writing logs, rather than depending directly on the external `_auto_log` list
#### Scenario: Rate limiting stays integrated
- **WHEN** `auto_comment_once` and similar functions need to check daily quotas and cooldown state before running
- **THEN** they do so by calling functions in the `rate_limiter` module, without duplicating the throttling logic inside `engagement.py`
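The `log_fn` parameterization can be sketched as follows; the action body is a stand-in for the real MCP call, and the message format is illustrative:

```python
from typing import Callable

def auto_comment_once(note_id: str) -> str:
    # Stand-in for the real action, which would call the MCP client.
    return f"commented on {note_id}"

def _auto_comment_with_log(note_id: str, log_fn: Callable[[str], None]) -> str:
    """Same behavior as the original wrapper, but logging goes through the
    injected callback instead of a module-level _auto_log list."""
    result = auto_comment_once(note_id)
    log_fn(f"[auto_comment] {result}")
    return result

log: list = []
_auto_comment_with_log("note-1", log.append)
print(log)  # → ['[auto_comment] commented on note-1']
```

In production the scheduler passes `log_fn=_auto_log_append`; in tests a plain `list.append` suffices, which is exactly the decoupling the scenario asks for.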

View File

@ -0,0 +1,12 @@
## ADDED Requirements
### Requirement: Move hotspot discovery into a dedicated module
The system SHALL extract the hotspot search and analysis functions from `main.py` into `services/hotspot.py`, including: `search_hotspots`, `analyze_and_suggest`, `generate_from_hotspot`, `_set_cache`, `_get_cache`, `_fetch_and_cache`, `_pick_from_cache`, `fetch_proactive_notes`, `on_proactive_note_selected`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.hotspot import search_hotspots, analyze_and_suggest`
- **THEN** all functions are callable as before
#### Scenario: Thread-safe cache migrates with the module
- **WHEN** `_cache_lock` (a `threading.RLock`) moves into `services/hotspot.py` together with the functions
- **THEN** the thread-safety behavior of `_set_cache` / `_get_cache` is unchanged

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move persona constants and functions into a dedicated module
The system SHALL extract the persona-related constants and functions from `main.py` into `services/persona.py`, including: `DEFAULT_PERSONAS`, `RANDOM_PERSONA_LABEL`, `PERSONA_POOL_MAP`, `DEFAULT_TOPICS`, `DEFAULT_STYLES`, `DEFAULT_COMMENT_KEYWORDS`, `_match_persona_pools`, `get_persona_topics`, `get_persona_keywords`, `on_persona_changed`, `_resolve_persona`.
#### Scenario: Constants are importable from the module
- **WHEN** `main.py` executes `from services.persona import DEFAULT_PERSONAS, PERSONA_POOL_MAP`
- **THEN** the constant values SHALL be identical to their pre-migration values
#### Scenario: Persona resolution handles the random-persona label
- **WHEN** `_resolve_persona(RANDOM_PERSONA_LABEL)` is called
- **THEN** it SHALL return a persona text randomly picked from the persona pool, behaving as before the migration
#### Scenario: Persona-change callback still fires
- **WHEN** `on_persona_changed(persona_text)` is called
- **THEN** it SHALL return the updated topic list and keyword list for the Gradio UI to consume

View File

@ -0,0 +1,12 @@
## ADDED Requirements
### Requirement: Move profile parsing into a dedicated module
The system SHALL extract the user-profile fetching and parsing functions from `main.py` into `services/profile.py`, including: `_parse_profile_json`, `_parse_count`, `fetch_my_profile`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes `from services.profile import fetch_my_profile`
- **THEN** the function is callable and behaves as before
#### Scenario: Parsing stays fault-tolerant
- **WHEN** `_parse_count` receives an irregular count string (e.g. "1.2万" or "--")
- **THEN** it SHALL return the same float (or 0) as before the migration, without raising an exception

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move scheduling-queue operations into a dedicated module
The system SHALL extract the content-scheduling-queue functions from `main.py` into `services/queue_ops.py`, including: `generate_to_queue`, `_queue_publish_callback`, `queue_refresh_table`, `queue_refresh_calendar`, `queue_preview_item`, `queue_approve_item`, `queue_reject_item`, `queue_delete_item`, `queue_retry_item`, `queue_publish_now`, `queue_start_processor`, `queue_stop_processor`, `queue_get_status`, `queue_batch_approve`, `queue_generate_and_refresh`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.queue_ops import queue_generate_and_refresh, queue_refresh_table`
- **THEN** all functions are callable as before
#### Scenario: The publish callback is registered in main.py
- **WHEN** the app starts and `main.py` calls `pub_queue.set_publish_callback(_queue_publish_callback)` with `_queue_publish_callback` now living in `queue_ops.py`
- **THEN** the queue publish callback SHALL register correctly and fire during queue processing
#### Scenario: Queue operations access the pub_queue singleton via parameters
- **WHEN** functions in `queue_ops.py` need `pub_queue` or `queue_publisher`
- **THEN** those singletons SHALL be passed in as function parameters, not initialized at the top level of `queue_ops.py`

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move rate limiting and daily quotas into a dedicated module
The system SHALL extract all rate-limiting, daily-quota, and cooldown state variables and functions from `main.py` into `services/rate_limiter.py`, including: `_auto_running`, `_op_history`, `_daily_stats`, `DAILY_LIMITS`, `_consecutive_errors`, `_error_cooldown_until`, `_reset_daily_stats_if_needed`, `_check_daily_limit`, `_increment_stat`, `_record_error`, `_clear_error_streak`, `_is_in_cooldown`, `_is_in_operating_hours`, `_get_stats_summary`.
#### Scenario: Module-level state initializes exactly once
- **WHEN** Python imports `services/rate_limiter.py` for the first time
- **THEN** module-level variables such as `_daily_stats` and `_op_history` SHALL be initialized only once (Python's module-singleton semantics)
#### Scenario: Daily-quota checks still work
- **WHEN** `_check_daily_limit("comment")` is called
- **THEN** the return value SHALL match pre-migration behavior exactly
#### Scenario: Operating-hours restriction still works
- **WHEN** `_is_in_operating_hours` is called while the current time is outside the `start_hour`–`end_hour` window
- **THEN** it returns `False`, blocking automated operations

View File

@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move the auto scheduler into a dedicated module
The system SHALL extract the scheduler state variables and functions from `main.py` into `services/scheduler.py`, including: `_scheduler_next_times`, `_auto_log` (the list), `_auto_log_append`, `_scheduler_loop`, `start_scheduler`, `stop_scheduler`, `get_auto_log`, `get_scheduler_status`, `_learn_running`, `_learn_scheduler_loop`, `start_learn_scheduler`, `stop_learn_scheduler`.
#### Scenario: Scheduler start/stop still works
- **WHEN** `start_scheduler(...)` is called with valid arguments
- **THEN** the scheduler thread SHALL start normally, and `get_scheduler_status()` reports it as running
#### Scenario: Log appends are thread-safe
- **WHEN** multiple automation tasks call `_auto_log_append(msg)` concurrently
- **THEN** log entries SHALL be appended correctly, with none lost or reordered
#### Scenario: engagement writes logs through the callback
- **WHEN** functions in `services/engagement.py` need to write logs
- **THEN** they SHALL write via the `log_fn` parameter (`scheduler.py` passes in `_auto_log_append`) instead of importing `scheduler.py` directly

View File

@ -0,0 +1,30 @@
## ADDED Requirements
### Requirement: Extract the remaining Gradio tabs into standalone UI modules
The system SHALL extract each of the 7 Gradio tabs in `main.py` other than Tab 1 (already done) into its own `ui/tab_*.py` module, each exposing a `build_tab(...)` function:
| Module file | Tab name |
|---|---|
| `ui/tab_hotspot.py` | 🔥 Hotspot Discovery |
| `ui/tab_engage.py` | 💬 Engagement |
| `ui/tab_profile.py` | 👤 My Profile |
| `ui/tab_auto.py` | 🤖 Auto Operations |
| `ui/tab_queue.py` | 📅 Content Scheduling |
| `ui/tab_analytics.py` | 📊 Data Analytics |
| `ui/tab_settings.py` | ⚙️ System Settings |
#### Scenario: Every tab module exposes build_tab
- **WHEN** `main.py` executes `from ui.tab_hotspot import build_tab as build_tab_hotspot`
- **THEN** calling `build_tab_hotspot(fn_*, ...)` SHALL return a dict of the components that must be shared across tabs
#### Scenario: build_tab receives callbacks instead of importing services
- **WHEN** `build_tab(...)` needs to invoke business functions
- **THEN** those functions SHALL be passed in as `fn_*` parameters (matching the existing pattern in `tab_create.py`); `ui/tab_*.py` does not `import services.*` directly
#### Scenario: Event bindings live inside build_tab
- **WHEN** `build_tab(...)` is called
- **THEN** all `.click()`, `.change()`, and other event bindings for that tab's Gradio components SHALL be completed inside the function; `main.py` retains no event-binding code for the tab
#### Scenario: main.py becomes a pure entry layer
- **WHEN** all 11 capabilities have been migrated
- **THEN** `main.py` SHALL be no more than 400 lines and contain no business logic (only imports, singleton initialization, UI assembly, and `app.launch()`)
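The `build_tab` contract can be sketched without importing Gradio; here `Component` stands in for `gr.Textbox` / `gr.Button`, and the `fn_*` names are illustrative:

```python
# Gradio-free sketch of the build_tab contract: the module receives its
# business callbacks as fn_* parameters and returns only the components
# that other tabs need to reference.
class Component:
    def __init__(self, name: str):
        self.name = name

def build_tab(fn_search, fn_analyze) -> dict:
    results = Component("hotspot_results")
    # Real code would also create buttons and bind events here, e.g.
    # btn_search.click(fn_search, inputs=[...], outputs=[results])
    return {"hotspot_results": results}

shared = build_tab(fn_search=lambda q: [], fn_analyze=lambda n: "")
print(sorted(shared))  # → ['hotspot_results']
```

`main.py` collects these returned dicts and passes shared components onward, which is why each tab module stays ignorant of `services/` internals.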

View File

@ -0,0 +1,96 @@
## 1. Base structure
- [x] 1.1 Create the `services/` directory with an empty `services/__init__.py`
- [x] 1.2 Confirm `ui/__init__.py` already exists (created in the previous round)
## 2. Migrate services/rate_limiter.py
- [x] 2.1 Create `services/rate_limiter.py` and move the module-level variables `_auto_running`, `_op_history`, `_daily_stats`, `DAILY_LIMITS`, `_consecutive_errors`, `_error_cooldown_until`
- [x] 2.2 Move the functions `_reset_daily_stats_if_needed`, `_check_daily_limit`, `_increment_stat`, `_record_error`, `_clear_error_streak`, `_is_in_cooldown`, `_is_in_operating_hours`, `_get_stats_summary`
- [x] 2.3 Delete the corresponding variables and functions from `main.py` and add `from services.rate_limiter import ...`
- [x] 2.4 Run `ast.parse()` to verify `main.py` and `services/rate_limiter.py` are syntactically valid
## 3. Migrate services/autostart.py
- [x] 3.1 Create `services/autostart.py` and move `_APP_NAME`, `_STARTUP_REG_KEY`, and all autostart functions (`_get_startup_script_path`, `_get_startup_bat_path`, `_create_startup_scripts`, `is_autostart_enabled`, `enable_autostart`, `disable_autostart`, `toggle_autostart`)
- [x] 3.2 Delete the corresponding code from `main.py` and add `from services.autostart import ...`
- [x] 3.3 Run `ast.parse()` to verify the syntax
## 4. Migrate services/persona.py
- [x] 4.1 Create `services/persona.py` and move the constants `DEFAULT_PERSONAS`, `RANDOM_PERSONA_LABEL`, `PERSONA_POOL_MAP`, `DEFAULT_TOPICS`, `DEFAULT_STYLES`, `DEFAULT_COMMENT_KEYWORDS`
- [x] 4.2 Move the functions `_match_persona_pools`, `get_persona_topics`, `get_persona_keywords`, `on_persona_changed`, `_resolve_persona`
- [x] 4.3 Delete the corresponding code from `main.py` and add `from services.persona import ...`
- [x] 4.4 Run `ast.parse()` to verify the syntax
## 5. Migrate services/connection.py
- [x] 5.1 Create `services/connection.py` and move `_get_llm_config`, `connect_llm`, `add_llm_provider`, `remove_llm_provider`, `on_provider_selected`
- [x] 5.2 Move the SD functions `connect_sd`, `on_sd_model_change`
- [x] 5.3 Move the MCP / login functions `check_mcp_status`, `get_login_qrcode`, `logout_xhs`, `_auto_fetch_xsec_token`, `check_login`, `save_my_user_id`, `upload_face_image`, `load_saved_face_image`
- [x] 5.4 Ensure every function receives `cfg`, `llm`, `sd`, `mcp`, etc. as parameters, with no singletons initialized at module top level
- [x] 5.5 Delete the corresponding functions from `main.py` and add `from services.connection import ...`
- [x] 5.6 Run `ast.parse()` to verify the syntax
## 6. Migrate services/profile.py
- [x] 6.1 Create `services/profile.py` and move `_parse_profile_json`, `_parse_count`, `fetch_my_profile`
- [x] 6.2 Delete the corresponding functions from `main.py` and add `from services.profile import ...`
- [x] 6.3 Run `ast.parse()` to verify the syntax
## 7. Migrate services/hotspot.py
- [x] 7.1 Create `services/hotspot.py` and move the cache pieces `_cache_lock`, `_set_cache`, `_get_cache`, `_fetch_and_cache`, `_pick_from_cache`
- [x] 7.2 Move the hotspot functions `search_hotspots`, `analyze_and_suggest`, `generate_from_hotspot`, `fetch_proactive_notes`, `on_proactive_note_selected`
- [x] 7.3 Delete the corresponding code from `main.py` and add `from services.hotspot import ...`
- [x] 7.4 Run `ast.parse()` to verify the syntax
## 8. Migrate services/content.py
- [x] 8.1 Create `services/content.py` and move `generate_copy`, `generate_images`, `one_click_export`, `publish_to_xhs`
- [x] 8.2 Ensure the input validation and the `finally` temp-file cleanup in `publish_to_xhs` are fully preserved
- [x] 8.3 Delete the corresponding functions from `main.py` and add `from services.content import ...`
- [x] 8.4 Run `ast.parse()` to verify the syntax
## 9. Migrate services/engagement.py
- [x] 9.1 Create `services/engagement.py` and move the note/comment functions `load_note_for_comment`, `ai_generate_comment`, `send_comment`, `fetch_my_notes`, `on_my_note_selected`, `fetch_my_note_comments`, `ai_reply_comment`, `send_reply`
- [x] 9.2 Move the automation functions `auto_comment_once`, `auto_like_once`, `auto_favorite_once`, `auto_reply_once` and their `_with_log` wrappers
- [x] 9.3 Change the `_with_log` functions to accept a `log_fn` callback parameter instead of referencing the external `_auto_log`
- [x] 9.4 Delete the corresponding functions from `main.py` and add `from services.engagement import ...`
- [x] 9.5 Run `ast.parse()` to verify the syntax
## 10. Migrate services/scheduler.py
- [x] 10.1 Create `services/scheduler.py` and move the state and log pieces `_auto_log`, `_scheduler_next_times`, `_auto_log_append`
- [x] 10.2 Move the scheduler functions `_scheduler_loop`, `start_scheduler`, `stop_scheduler`, `get_auto_log`, `get_scheduler_status`
- [x] 10.3 Move the learning scheduler pieces `_learn_running`, `_learn_scheduler_loop`, `start_learn_scheduler`, `stop_learn_scheduler`
- [x] 10.4 Ensure `_scheduler_loop` passes `log_fn=_auto_log_append` when calling `engagement` functions
- [x] 10.5 Delete the corresponding code from `main.py` and add `from services.scheduler import ...`
- [x] 10.6 Run `ast.parse()` to verify the syntax
## 11. Migrate services/queue_ops.py
- [x] 11.1 Create `services/queue_ops.py` and move all queue operation functions: `generate_to_queue`, `_queue_publish_callback`, `queue_refresh_table`, `queue_refresh_calendar`, `queue_preview_item`, `queue_approve_item`, `queue_reject_item`, `queue_delete_item`, `queue_retry_item`, `queue_publish_now`, `queue_start_processor`, `queue_stop_processor`, `queue_get_status`, `queue_batch_approve`, `queue_generate_and_refresh`
- [x] 11.2 Ensure `pub_queue` and `queue_publisher` are passed into each function as parameters, not initialized at module top level
- [x] 11.3 Delete the corresponding functions from `main.py` and add `from services.queue_ops import ...`; keep the `pub_queue.set_publish_callback(_queue_publish_callback)` call in the `main.py` initialization section
- [x] 11.4 Run `ast.parse()` to verify the syntax
## 12. Split the UI tab modules
- [x] 12.1 Create `ui/tab_hotspot.py`, extracting all Gradio components and event bindings of Tab 2 (🔥 Hotspot Discovery) and exposing a `build_tab(fn_*, ...)` function
- [x] 12.2 Create `ui/tab_engage.py`, extracting all components and event bindings of Tab 3 (💬 Engagement)
- [x] 12.3 Create `ui/tab_profile.py`, extracting all components and event bindings of Tab 4 (👤 My Profile)
- [x] 12.4 Create `ui/tab_auto.py`, extracting all components and event bindings of Tab 5 (🤖 Auto Operations)
- [x] 12.5 Create `ui/tab_queue.py`, extracting all components and event bindings of Tab 6 (📅 Content Scheduling)
- [x] 12.6 Create `ui/tab_analytics.py`, extracting all components and event bindings of Tab 7 (📊 Data Analytics)
- [x] 12.7 Create `ui/tab_settings.py`, extracting all components and event bindings of Tab 8 (⚙️ System Settings)
- [x] 12.8 Replace each tab block in `main.py` with the corresponding `build_tab(...)` call, then delete the emptied tab blocks
- [x] 12.9 Run `ast.parse()` to verify all new UI modules are syntactically valid
## 13. Entry-layer cleanup and verification
- [x] 13.1 Verify `main.py` is no more than 400 lines
- [x] 13.2 Check that `main.py` defines no business-logic functions (inline lambdas excepted)
- [x] 13.3 Run `python main.py` and confirm the app starts without errors
- [x] 13.4 Switch through every tab in the browser and confirm the UI renders and events respond correctly

View File

@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-26

View File

@ -0,0 +1,97 @@
## Context
The current `ui/app.py` (1369 lines) puts a giant `gr.Accordion("⚙️ 全局设置")` at the top of `gr.Blocks`, holding roughly 100+ lines of component declarations for LLM / SD / face-swap / system settings. `gr.Tabs()` follows immediately; the first tab is actually Content Creation (`build_tab` returns a component dict), followed by Hotspot Discovery, Comment Butler, Account Login, Data Dashboard, Smart Learning, Auto Operations, and Content Scheduling.
Core constraints:
1. `build_tab()` is defined in `ui/tab_create.py` and returns a component dict (`res_title`, `res_content`, `res_prompt`, `res_tags`, `quality_mode`, `steps`, `cfg_scale`, `neg_prompt`, etc.). The dict **keys must not change**, or every click binding in `app.py` breaks.
2. Once a Gradio component is created, its reference must stay inside the `gr.Blocks` context; it cannot be moved across contexts. So if the "global components" (`llm_model`, `sd_model`, `persona`, `status_bar`, `face_swap_toggle`, `face_image_preview`, `sd_url`, `mcp_url`) are to live in a dedicated Config tab, they must be created inside that tab and then passed as arguments to `build_tab()` — the existing interface is already designed this way, so no signature changes are needed.
3. Custom CSS must be injected via the `gr.Blocks(css=...)` parameter; Gradio 4.x supports targeting via `elem_id` / `elem_classes`.
## Goals / Non-Goals
**Goals:**
- Remove the top-level global accordion and move the LLM / SD / account / system settings components into a new "⚙️ Config" tab
- Reorder the tabs to: ✍️ Content Creation → 📅 Content Scheduling → 🔥 Hotspot Discovery → 💬 Comment Butler → 📊 Data Dashboard → 🧠 Smart Learning → 🤖 Auto Operations → 🔐 Account Login → ⚙️ Config
- Restructure the Content Creation tab into a three-column layout (already implemented in `tab_create.py`; align the details)
- Switch the Auto Operations tab to a 2-column card grid
- Inject custom CSS (font hierarchy, button radius, card shadows, section-title rules)
- Standardize the button `variant` tiers
**Non-Goals:**
- No changes to business logic, event handlers, or the `services/` layer
- No changes to the dict keys returned by `build_tab()`
- No new Python dependencies
- No refactor of the Content Creation tab's internal logic (layout only)
## Decisions
### D1: Global components stay in the Blocks context; the Config tab is only a visual container
**Decision**: Add a "⚙️ Config" tab inside `gr.Tabs()` and **move the component declarations from the old accordion wholesale** into that tab, keeping all variable names unchanged.
**Rationale**: A Gradio component's `Block` ownership is fixed at declaration; it cannot be "moved" afterwards. Declaring inside a Tab is semantically equivalent in Python to declaring inside an Accordion, so all subsequent `.change()` / `.click()` bindings still see the same variables.
**Alternative**: keep the accordion but force it collapsed (`open=False`) — rejected; it does not fundamentally reduce first-screen clutter.
### D2: Tab reordering strategy
**Decision**: Order by usage frequency, highest first:
```
0 ✍️ Content Creation   (main workflow entry)
1 📅 Content Scheduling (publishing management)
2 🔥 Hotspot Discovery  (inspiration source)
3 💬 Comment Butler     (engagement operations)
4 📊 Data Dashboard     (viewing results)
5 🧠 Smart Learning     (background task)
6 🤖 Auto Operations    (timed tasks)
7 🔐 Account Login      (one-time setup)
8 ⚙️ Config             (initialization / low frequency)
```
**Rationale**: Account login and config are both initial setup and rarely revisited; placing them last keeps them out of the core workflow's way.
### D3: Auto Operations tab — 2-column Group cards instead of CSS Grid
**Decision**: Use `gr.Row()` + `with gr.Column(scale=1)` for two equal-width columns, wrapping each task block in `gr.Group()` (Gradio's native grouping component with border/background), rather than relying on pure CSS grid.
**Rationale**: `gr.Group` in Gradio 4.x gives out-of-the-box visual grouping with no need for precise `elem_id` + CSS targeting, which is more reliable. CSS is only used to fine-tune shadows/radii, not to carry the main layout.
**Alternative**: pure CSS grid + `elem_classes` — rejected; Gradio's CSS isolation tends to clash with its internal shadow DOM and is costly to maintain.
### D4: CSS injection scope — minimal
**Decision**: The CSS covers only:
1. the `body` font (Inter / system font stack)
2. `.btn-primary` radius and shadow
3. the `.gr-group` card shadow
4. section divider headings (`hr` + a `.section-title` class)
It does not touch Gradio-internal components (`textarea`, `input`, etc.), so version upgrades cannot break the styling.
### D5: Content Creation three-column layout — adjusted inside tab_create.py
**Decision**: Inside `build_tab()`, replace the current vertical stack with a `gr.Row()` wrapping three `gr.Column(scale=3/4/3)`:
- Left column: persona selection, topics/styles, generation parameters
- Middle column: copy output (title/content/tags) plus copy action buttons
- Right column: image preview gallery plus image action buttons (including the beautify-strength slider)
**Rationale**: `build_tab()` is self-contained and the component dict keys stay unchanged, so `app.py` needs zero changes.
## Risks / Trade-offs
| Risk | Mitigation |
|------|----------|
| Global components moved into a tab might initialize only when the Config tab is first opened (Gradio lazy render) | Gradio 4.x renders all tabs at page load by default (not lazily); low risk |
| Injected CSS conflicts with the Gradio theme | Only override `:root` variables and use namespaced selectors; never override Gradio-internal classes directly |
| The three-column `build_tab()` layout gets squeezed on narrow screens (<1200px) | Set `wrap=True` on the Row (supported by Gradio) or handle it with a CSS media query |
| The two-column Auto Operations cards collapse on small screens | Same as above; add `@media` breakpoint CSS |
## Migration Plan
1. **No data migration** — this is a pure UI code change
2. **Deployment**: replace `ui/app.py` and `ui/tab_create.py`, then restart the app
3. **Rollback**: `git revert`; no state changes
## Open Questions
- **None** — all technical decisions are settled; ready for task breakdown

View File

@ -0,0 +1,28 @@
## Why
The current UI is a linear stack: global settings are crammed into a collapsible area, information density varies wildly between tabs, and the core creation flow (the Content Creation tab) lacks clear visual zoning. The result is a steep learning curve for new users, long paths to high-frequency actions, and heavy page scrolling. As features keep accumulating, a systematic layout overhaul is needed to improve usability and polish.
## What Changes
- **Global settings restructured**: split the LLM / SD / XHS-account settings out of the collapsible area into a dedicated "⚙️ Config" tab, removing the accordion from the top of the main page to reduce first-screen clutter
- **Tab order optimized**: put the highest-frequency "✍️ Content Creation" tab first (Tab 0) and the next-most-used "📅 Content Scheduling" second; move the low-frequency "🔐 Account Login" and "⚙️ Config" to the end
- **Three-column Content Creation tab**: left column (parameters) | middle column (copy preview/editing) | right column (image preview/actions), in a 3:4:3 ratio so high-frequency actions fit on one screen without scrolling
- **Auto Operations tab as panels**: replace the switch-dense single column with a 2×N card grid, each automation task in its own card with three elements: switch, interval, and last-run time
- **Unified visual language**: tiered button variants (primary actions `variant="primary"` / secondary actions `variant="secondary"` / destructive actions `variant="stop"`), plus dividers and small headings for key areas
- **New CSS theme layer**: inject custom CSS via `gr.Blocks(css=...)` to refine the font hierarchy, button radii, and card shadows
## Capabilities
### New Capabilities
- `ui-global-config-tab`: move the global settings (LLM / SD / account) into a dedicated tab with full connection-status display
### Modified Capabilities
- `ui-tabs-split`: tab order and titles change — Content Creation goes first, a Config tab is added, and the top-level settings accordion is removed
- `ui-module-split`: the Content Creation tab becomes a three-column layout; the Auto Operations tab becomes a card grid
## Impact
- **Files directly modified**: `ui/app.py`, `ui/tab_create.py`
- **Potential impact**: event bindings in `services/` receive component references as parameters, so the layout change does not affect logic; however, the component dict keys returned by `tab_create.py` must not change, or the click bindings in `app.py` break
- **No external API / dependency changes**
- **No breaking change** (all existing features remain; only positions and visual styling change)

View File

@ -0,0 +1,20 @@
## ADDED Requirements
### Requirement: A dedicated Config tab hosts the global settings
The system SHALL provide a "⚙️ Config" tab at the end of the main tab list, moving all component declarations for LLM provider settings, SD WebUI settings, ReActor face-swap settings, and system settings (autostart, etc.) into that tab, and SHALL remove the `gr.Accordion("⚙️ 全局设置")` block from the top of the main page.
#### Scenario: No global-settings accordion on the first screen
- **WHEN** the user opens the app
- **THEN** the top of the main page SHALL show no collapsible block; the tab navigation bar appears directly
#### Scenario: The Config tab contains everything from the old accordion
- **WHEN** the user switches to the "⚙️ Config" tab
- **THEN** the tab SHALL contain the LLM provider Dropdown, LLM model Dropdown, add/remove provider panel, MCP Server URL, SD WebUI URL, connect/check buttons, SD model, blogger persona, AI face swap (avatar upload + toggle), and autostart toggle, all functioning exactly as in the old accordion
#### Scenario: Cross-tab shared components remain accessible
- **WHEN** other tabs such as Content Creation need components like `llm_model`, `sd_model`, `persona`, `status_bar`, `face_swap_toggle`
- **THEN** these components SHALL be declared within the `gr.Blocks` context and passed as arguments to each `build_tab()`; their behavior is unaffected by the tab's physical position
#### Scenario: Live connection-status feedback
- **WHEN** the user clicks "🔗 连接 LLM" or "🎨 连接 SD" in the Config tab
- **THEN** the `status_bar` Markdown component SHALL update with the connection result in real time, matching the old accordion's behavior

View File

@ -0,0 +1,49 @@
## MODIFIED Requirements
### Requirement: Content Creation tab UI code lives in a standalone module
`ui/tab_create.py` SHALL contain all Gradio component definitions and event bindings for the Content Creation tab and export a `build_tab(...) -> dict` function returning the dict of cross-tab shared components (the key set must not change).
The Content Creation tab SHALL use a **three-column layout**:
- **Left column (scale=3)**: persona selection, topics/styles, copy-generation parameters (Advanced Settings accordion)
- **Middle column (scale=4)**: copy output area (title, body, tags, prompt) plus copy action buttons
- **Right column (scale=3)**: image preview Gallery plus image action buttons (including the beautify-strength slider)
The three columns are implemented as a `gr.Row()` wrapping three `gr.Column(scale=...)`; all high-frequency actions SHALL be visible without vertical scrolling at ≥1280px viewport width.
#### Scenario: main.py/app.py starts and shows the Content Creation tab
- **WHEN** the app is launched and Gradio starts
- **THEN** the Content Creation tab renders the three-column layout, with all components functioning as before the migration
#### Scenario: The tab_create module imports on its own
- **WHEN** Python executes `from ui.tab_create import build_tab`
- **THEN** no import error is raised, and `build_tab` is a callable object
#### Scenario: No scrolling needed at wide widths
- **WHEN** the user opens the Content Creation tab in a browser at ≥1280px width
- **THEN** the left-column parameters, middle-column copy output, and right-column image preview SHALL be visible simultaneously, with no vertical scrolling
#### Scenario: build_tab's returned keys are unchanged
- **WHEN** `build_tab(...)` is called and returns its dict
- **THEN** the dict SHALL contain at least the original keys `res_title`, `res_content`, `res_prompt`, `res_tags`, `quality_mode`, `steps`, `cfg_scale`, `neg_prompt`
### Requirement: ui/ directory structure conventions
The `ui/` directory SHALL contain `__init__.py`; each tab module file follows the `tab_<name>.py` naming convention, and tab modules do not call global service-initialization code directly.
#### Scenario: Standard structure for a new tab module
- **WHEN** a developer creates a new `ui/tab_*.py` file
- **THEN** the file exports a `build_tab(...)` function and has no side-effect code at the top level
### Requirement: Auto Operations tab uses a two-column card grid
Each automation task in the "🤖 Auto Operations" tab SHALL be shown in a **2-column × N-row** card grid, each task wrapped in `gr.Group()`; each card SHALL contain the task title, an enable switch (`gr.Checkbox`), the run interval (`gr.Number` or `gr.Slider`), and the last-run time (`gr.Markdown`).
#### Scenario: Tasks display in a two-column grid
- **WHEN** the user switches to the "🤖 Auto Operations" tab
- **THEN** the automation tasks (auto comment, auto like, auto favorite, auto publish, auto reply, etc.) SHALL be arranged in a two-column card grid with equal column widths
#### Scenario: Each card carries full task controls
- **WHEN** the user views any task card
- **THEN** the card SHALL show the task switch and interval settings, functioning exactly as in the old single-column layout
#### Scenario: Custom CSS enhances the visuals
- **WHEN** the app finishes loading
- **THEN** `gr.Blocks(css=...)` SHALL inject custom styles covering the font hierarchy, button radius (≥6px), and a light `gr.Group` card shadow (`box-shadow`), without breaking Gradio-internal component styles

View File

@ -0,0 +1,38 @@
## MODIFIED Requirements
### Requirement: Remaining Gradio tabs extracted into standalone UI modules
The system SHALL arrange all Gradio tabs in `ui/app.py` in the following order, with no global-settings accordion (`gr.Accordion`) above them:
| # | Tab name | Module / notes |
|------|----------|-----------|
| 0 | ✍️ Content Creation | `ui/tab_create.py` |
| 1 | 📅 Content Scheduling | inline or `ui/tab_queue.py` |
| 2 | 🔥 Hotspot Discovery | inline or `ui/tab_hotspot.py` |
| 3 | 💬 Comment Butler | inline or `ui/tab_engage.py` |
| 4 | 📊 Data Dashboard | inline or `ui/tab_analytics.py` |
| 5 | 🧠 Smart Learning | inline or `ui/tab_learn.py` |
| 6 | 🤖 Auto Operations | inline or `ui/tab_auto.py` |
| 7 | 🔐 Account Login | inline or `ui/tab_profile.py` |
| 8 | ⚙️ Config | inline (contains all former global-settings components) |
Each tab module SHALL expose a `build_tab(...)` function accepting the needed component references and callbacks as parameters.
#### Scenario: Every tab module exposes build_tab
- **WHEN** `ui/app.py` executes `from ui.tab_create import build_tab`
- **THEN** calling `build_tab(...)` SHALL return a dict of the components shared across tabs
#### Scenario: build_tab receives callbacks instead of importing services
- **WHEN** `build_tab(...)` needs to invoke business functions
- **THEN** those functions SHALL be passed in as `fn_*` parameters; `ui/tab_*.py` does not `import services.*` directly
#### Scenario: Event bindings live inside build_tab
- **WHEN** `build_tab(...)` is called
- **THEN** all `.click()`, `.change()`, and other event bindings for that tab's Gradio components SHALL be completed inside the function
#### Scenario: Content Creation is the first tab (index 0)
- **WHEN** the user opens the app
- **THEN** the default active tab SHALL be "✍️ Content Creation", letting the user start the creation workflow without any extra clicks
#### Scenario: Config and account tabs sit at the end
- **WHEN** the user looks at the tab navigation bar
- **THEN** "🔐 Account Login" SHALL be second-to-last and "⚙️ Config" SHALL be last, keeping low-frequency actions out of the main workspace

View File

@ -0,0 +1,44 @@
## 1. CSS theme layer injection
- [x] 1.1 Define a `_GRADIO_CSS` string constant at the top of `ui/app.py` covering the body font stack (Inter/system-ui), button radius (border-radius 6px), and a light `gr.Group` shadow (box-shadow)
- [x] 1.2 Change `gr.Blocks(title=...)` to `gr.Blocks(title=..., css=_GRADIO_CSS)` to inject the custom styles
- [x] 1.3 Verify the styles take effect after startup: buttons are not distorted, and internal component (textarea/input) styles are untouched
## 2. Move global settings into the "⚙️ Config" tab
- [x] 2.1 Delete the top-level `with gr.Accordion("⚙️ 全局设置 (自动保存)", open=False):` block in `ui/app.py` (roughly lines 79–186)
- [x] 2.2 Move all global component declarations (`llm_provider`, `llm_model`, `btn_connect_llm`, `sd_url`, `sd_model`, `mcp_url`, `persona`, `face_image_input`, `face_image_preview`, `face_swap_toggle`, `status_bar`, etc.) wholesale into the new "⚙️ Config" tab
- [x] 2.3 Append `with gr.Tab("⚙️ 配置"):` at the end of `gr.Tabs()` and place the components from step 2.2 inside, keeping every variable name and event binding unchanged
- [x] 2.4 Verify all components in the "⚙️ Config" tab render correctly: LLM connect, SD connect, and face-swap avatar upload all work
- [x] 2.5 Verify the event bindings in other tabs (such as Content Creation) that use `llm_model`, `sd_model`, `persona`, `status_bar` still work
## 3. Tab reordering
- [x] 3.1 Reorder the tab declarations inside `gr.Tabs()` in `ui/app.py` to: ⚙️ Config (declared first; `selected=1` makes Content Creation the default) → ✍️ Content Creation → 📅 Content Scheduling → 🔥 Hotspot Discovery → 💬 Comment Butler → 📊 Data Dashboard → 🧠 Smart Learning → 🤖 Auto Operations → 🔐 Account Login
- [x] 3.2 Verify the default active tab at startup is "✍️ Content Creation" (achieved via `selected=1`)
## 4. Content Creation three-column layout
- [x] 4.1 In `ui/tab_create.py`, set the three `gr.Column` scales to `scale=3` (left), `scale=4` (middle), `scale=3` (right)
- [x] 4.2 Left column (scale=3): parameter configuration (persona, topics, styles, generate buttons, etc.)
- [x] 4.3 Middle column (scale=4): copy output (title, body, tags, prompt Textboxes)
- [x] 4.4 Right column (scale=3): image preview and image action buttons
- [x] 4.5 Verify all three columns are visible at 1280px width with no vertical scrolling
## 5. Auto Operations scheduling card grid
- [x] 5.1 In the right column (timed automation) of the "🤖 Auto Operations" tab, replace the vertically stacked 5 `gr.Group`s with a 3-row × 2-column grid (three `gr.Row()`s, each holding two `gr.Column(scale=1)`-wrapped cards); widen the right column to scale 2
- [x] 5.2 Add a `gr.Markdown("##### 任务名")` mini-heading inside each scheduling card's `gr.Group` for clearer card visuals
- [x] 5.3 Verify the switches, interval settings, and start/stop buttons of all 5 scheduling cards work
## 6. Unified button variant tiers
- [x] 6.1 Audit every `gr.Button` in `ui/app.py` and `ui/tab_create.py`: primary actions use `variant="primary"` (connect/generate/start); delete/stop/destructive actions use `variant="stop"` (`btn_del_provider`, `btn_logout`, `btn_queue_stop`, `btn_queue_delete`, `btn_clear_log`, `btn_learn_stop`, `btn_stop_sched`, `btn_queue_reject`); secondary actions use the default or no variant
- [x] 6.2 Confirm the "🗑️ 删除当前提供商" button in the "⚙️ Config" tab uses `variant="stop"`
## 7. Regression checks
- [ ] 7.1 Launch the app and confirm the first screen shows the "✍️ Content Creation" tab directly, with no top accordion
- [ ] 7.2 Walk the full "copy generation → image generation → publish" flow and verify everything works
- [ ] 7.3 Switch to the "⚙️ Config" tab, connect the LLM and SD, and confirm `status_bar` updates correctly
- [ ] 7.4 Switch to the "🤖 Auto Operations" tab, check the scheduling card grid renders correctly, and run one single-shot task

View File

@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-25

View File

@ -0,0 +1,86 @@
## Context
The system currently generates images through the Stable Diffusion WebUI API. `SD_MODEL_PROFILES` in `sd_service.py` statically configures each model's parameter presets (fast/standard/fine tiers) and prompt-enhancement words. After generation, images pass through `anti_detect_postprocess()` for anti-AI-detection post-processing. `get_sd_prompt_guide()` in `llm_service.py` supplies the LLM with a drawing-prompt writing guide.
**Current pain points:**
- Each model's "fine" tier uses low step counts (35–40) and low resolutions, and SDXL models do not exploit Hires Fix for extra detail
- The Chinese-face feature words in `prompt_prefix` are weak or generic (e.g. an unweighted `asian girl`), so the drift toward Western faces is not fully avoided
- `anti_detect_postprocess` mixes noise perturbation with beautification, so beautification strength cannot be tuned independently
- The LLM prompt guide lacks dedicated templates for photorealistic people / Chinese aesthetics
**Constraints:**
- No new external Python dependencies
- Hires Fix is enabled only for SDXL models (SD 1.5 lacks the VRAM)
- The existing `txt2img` API signature takes no breaking change; all new parameters default to backward-compatible values
## Goals / Non-Goals
**Goals:**
- Add a "high quality" preset tier per model, with Hires Fix parameters enabled for SDXL
- Systematically strengthen the Chinese-aesthetic feature words in each model's `prompt_prefix` (refined features, fair luminous skin, almond eyes, youthful look, Chinese temperament)
- Split `anti_detect_postprocess` into two independent stages, `beauty_enhance()` + `anti_detect_postprocess()`, with beautification strength controlled by an `enhance_level` parameter
- Expand the portrait-realism rules in the LLM prompt guide, adding a dedicated Chinese-aesthetics word template
- Add a "beautify strength" control to the UI drawing-parameter panel, passed through `services/content.py` into the post-processing pipeline
**Non-Goals:**
- No new SD model support (model-detection logic unchanged)
- No image super-resolution (no Real-ESRGAN or other external tools)
- No changes to the ReActor face-swap logic
- No changes to the publishing flow or queue logic
## Decisions
### Decision 1: Enable Hires Fix only for SDXL models
**Choice:** In the `txt2img` payload, inject Hires Fix parameters (`enable_hr: true`, `hr_scale: 1.5`, `hr_upscaler: "4x-UltraSharp"`, `denoising_strength: 0.4`) only when the model's `arch == "sdxl"` and `quality_mode` is the high-quality tier.
**Rationale:** SD 1.5 models usually lack the VRAM for Hires Fix; upscaling SDXL's 832×1216 base resolution by 1.5x reaches 1248×1824, a large gain in skin/facial detail, at the cost of roughly 50% more generation time.
**Alternatives:** PIL upscaling in post-processing → poor results with no AI-redrawn detail; enabling it for all models → OOM risk on SD 1.5.
---
### Decision 2: beauty_enhance as a standalone function, with enhance_level controlling strength
**Choice:** Extract the beautification logic (sharpening, color enhancement) from `anti_detect_postprocess` into `beauty_enhance(img, level: float = 1.0) -> Image`; `level=0` skips processing, `level=1` is the default, `level=2` is intensified. `anti_detect_postprocess` keeps only the perturbation logic; the call order is `beauty_enhance → anti_detect_postprocess`.
**Rationale:** Today both live in one function with hard-coded beautification parameters that the UI cannot adjust; splitting them enables independent testing and lets the UI control map directly to `enhance_level`.
**Alternative:** add a flag parameter to `anti_detect_postprocess` → muddles the function's responsibilities and violates the single-responsibility principle.
---
### Decision 3: Inject Chinese-aesthetic words into prompt_prefix as weighted phrases
**Choice:** Add weight parentheses to the Chinese-face feature words in each model's `prompt_prefix`, e.g. `(almond eyes:1.2)`, `(delicate nose:1.1)`, `(porcelain skin:1.2)`, `(youthful appearance:1.1)`, `(chinese beauty:1.2)`, and add Western-feature exclusion words such as `deep-set eyes, strong jawline, prominent brow ridge` to `negative_prompt`.
**Rationale:** SD weight syntax is the most direct and effective lever; it adds no new dependencies, and per-model configuration lets weights be tuned to each model's style.
---
### Decision 4: Add a Chinese-aesthetics portrait template to the LLM prompt guide
**Choice:** Add a "portrait vocabulary" section to `get_sd_prompt_guide()`, giving recommended words along four dimensions (facial features / skin tone / temperament / hairstyle), plus a rule: when describing people, prefer words from this vocabulary and ban non-directive generic words like `beautiful girl`.
**Rationale:** LLMs gravitate toward generic beauty words and, left unconstrained, drift toward Western-leaning descriptions; a rule-based vocabulary forces alignment with the target aesthetic.
## Risks / Trade-offs
- **[Risk] Hires Fix greatly increases generation time** → Mitigation: enable it by default only in the new "极致 (约5分钟)" tier; keep the other tiers at their current speed and label the expected duration in the UI
- **[Risk] Over-strong Chinese-aesthetic words make every image look the same** → Mitigation: keep weights between 1.1 and 1.2 (never above 1.3) and preserve per-persona style differentiation in `PERSONA_SD_PROFILES`
- **[Risk] beauty_enhance introduces a numpy dependency (actually already present)** → No risk; the existing code already does `import numpy`
- **[Trade-off] The enhance_level control adds UI complexity** → Accepted: it lives in the collapsed "Advanced Settings" panel and does not affect the main flow
## Migration Plan
1. Modify `sd_service.py`: update each model's `prompt_prefix`/`negative_prompt`, add the high-quality presets, split out `beauty_enhance`
2. Modify `llm_service.py`: update the `get_sd_prompt_guide()` content
3. Modify `services/content.py`: add an `enhance_level` parameter to `generate_images` (default `1.0`, backward compatible)
4. Modify `ui/tab_create.py`: add a "beautify strength" slider to Advanced Settings, range 0.0–2.0, default 1.0
No database migration and no config-format change. Rollback: `git revert`.
## Open Questions
- Is the Hires Fix `hr_upscaler` generally available in user environments (need to confirm the "4x-UltraSharp" model is downloaded)? If absent, degrade to "Latent" as the fallback
- The optimal `beauty_enhance` parameter values (sharpen radius, color gain factor) need tuning against real outputs

View File

@ -0,0 +1,31 @@
## Why
The current image-generation pipeline has two core problems. First, the SD presets are conservative (few steps, low CFG), the LLM's drawing-prompt guidance lacks precise realism and beauty words, and post-processing injects perturbation without any beautification first, so outputs look flat and under-detailed. Second, the positive guidance for faces and temperament has no targeted optimization for Chinese aesthetics (refined features, fair luminous skin, almond/peach-blossom eyes, youthful temperament, gentle Chinese vibe — the traits XHS target users prefer), so generated people fail to match users' aesthetic identification. With content volume now growing steadily, this is the moment to invest in quality, reputation, and user resonance.
## What Changes
- Add a "high quality" generation preset: raise each model's step counts, use stronger samplers (DPM++ 2M SDE Karras, etc.), and add Hires Fix / upscale-repair support
- Refine the system guidance for LLM-generated drawing prompts: inject more realism/lighting/skin-detail keywords and separate the style guidance for people versus scenes
- Systematically strengthen the Chinese-aesthetic feature words in each model's `prompt_prefix` and in `PERSONA_SD_PROFILES` (refined features, fair luminous skin, almond/peach-blossom eyes, youthful look, Chinese temperament), and remove generic words that drift toward Western faces
- Split the post-processing pipeline into two independent stages — "beauty enhancement" (sharpening, skin-tone correction, color lift) → "anti-AI-detection perturbation" — ensuring beautification runs before perturbation
- Expose a "beautify strength" parameter through the `generate_images` call chain in `services/content.py`, passed down to the post-processing pipeline
## Capabilities
### New Capabilities
- `image-quality-presets`: high-quality preset parameters per SD model (high step counts, high resolution, Hires Fix support) plus improved preset-selection logic
- `image-post-enhancement`: a dedicated post-generation beautification pipeline (smart sharpening, subtle skin-tone adjustment, saturation lift) that runs before the anti-AI perturbation
- `llm-prompt-aesthetics`: beauty/realism reinforcement rules for the LLM drawing-prompt guide, with enhancement-word templates for people and scenes respectively
- `chinese-aesthetic-profile`: a keyword system for Chinese-aesthetic faces and temperament, including model-level `prompt_prefix` enhancement vocabularies and per-persona face-style words, covering dimensions such as refined features, fair luminous skin, youthful look, and Chinese temperament
### Modified Capabilities
(No existing spec covers image-generation quality; nothing to modify.)
## Impact
- **`sd_service.py`**: the `presets` field of each model in `SD_MODEL_PROFILES` (new high-quality tier), `prompt_prefix`/`negative_prompt` (Chinese-aesthetic word tuning), each persona's `prompt_boost` in `PERSONA_SD_PROFILES` (stronger face-style words), the `txt2img` method (Hires Fix parameter injection), and `anti_detect_postprocess` (split into beauty_enhance + anti_detect stages)
- **`llm_service.py`**: the system guidance inside `get_sd_prompt_guide()` (including the Chinese-aesthetics portrait guide)
- **`services/content.py`**: the `generate_images()` signature gains an `enhance_level` parameter
- **`ui/tab_create.py`**: a "beautify strength" slider is added to the drawing-parameter panel

View File

@ -0,0 +1,35 @@
## ADDED Requirements
### Requirement: Every SD model's prompt_prefix includes Chinese-aesthetic face words
The system SHALL include the following Chinese-aesthetic feature words in the `prompt_prefix` of every model in `SD_MODEL_PROFILES` (exact weights may be tuned per model style, but the words must not be omitted):
- Eye shape: `(almond eyes:1.2)` or `(delicate almond-shaped eyes:1.2)`
- Skin tone: `(porcelain skin:1.2)`, `(fair porcelain skin:1.2)`, or `(luminous fair skin:1.2)`
- Refined features: `(delicate facial features:1.2)` or `(refined features:1.2)`
- Temperament: `(youthful appearance:1.1)` or `(elegant temperament:1.1)`
#### Scenario: Generated portraits show the refinement preferred by the Chinese mainstream
- **WHEN** any SD model generates an image containing a person
- **THEN** the `prompt_prefix` contains at least one word from each of the almond-eye, fair-skin, and refined-feature categories
### Requirement: Every SD model's negative_prompt adds Western-face exclusion words
The system SHALL include the following Western-face exclusion words in every model's `negative_prompt` in `SD_MODEL_PROFILES` (added on top of the existing ones):
- `strong jawline`, `prominent brow ridge`, `deep-set eyes` (keep existing weights where already present)
- `angular facial structure`, `square jaw`, `heavy brow`
#### Scenario: negative_prompt explicitly excludes Western facial structure
- **WHEN** any model generates a portrait
- **THEN** the negative_prompt contains at least two of the exclusion words `strong jawline`, `prominent brow ridge`, `deep-set eyes`
### Requirement: PERSONA_SD_PROFILES personas carry differentiated Chinese-temperament boosts
The system SHALL ensure each persona's `prompt_boost` in `PERSONA_SD_PROFILES` contains at least one Chinese-aesthetic temperament word, with the temperament words differentiated across personas (e.g. sweet girl → youthful look; intellectual → elegant temperament; cyber blogger → refinement + futuristic feel).
#### Scenario: Different personas produce perceivably different temperaments
- **WHEN** one batch of images is generated with the "sweet girl" persona and another with the "intellectual" persona
- **THEN** the core temperament words in the two batches' `prompt_boost` do not overlap, showing differentiated styles
### Requirement: Beauty enhancement corrects skin-tone drift
The `beauty_enhance()` function SHALL nudge the hue toward warm white / natural skin when raising saturation, preventing the saturation lift from pushing skin tones yellow or red. Concretely, adjust the PIL hue toward warm white within a ±5° range.
#### Scenario: Skin does not turn yellow or red after beautification
- **WHEN** `beauty_enhance(img, level=1.0)` processes an image containing a person
- **THEN** the output's skin regions gain saturation while the hue stays in the natural warm-white range (no visible yellow/red cast)

View File

@ -0,0 +1,38 @@
## ADDED Requirements
### Requirement: beauty_enhance as a standalone enhancement function
The system SHALL provide `beauty_enhance(img: Image.Image, level: float = 1.0) -> Image.Image` in `sd_service.py`, supporting the following enhancement operations (all intensities scale linearly with `level`):
- Smart sharpening (based on `ImageFilter.UnsharpMask`, emphasizing facial contours and hair detail)
- Slight brightness and contrast lift (+2–3% each at `level=1.0`, +4–6% at `level=2.0`)
- Saturation lift (+5% at `level=1.0`, +10% at `level=2.0`, for more even, fuller skin tones)
- At `level=0` it SHALL return the original image directly, skipping all processing
#### Scenario: Normal enhancement call
- **WHEN** `beauty_enhance(img, level=1.0)` is called
- **THEN** it returns a PIL Image that has been sharpened, brightness-tweaked, and saturation-lifted, with unchanged dimensions
#### Scenario: level=0 skips processing
- **WHEN** `beauty_enhance(img, level=0)` is called
- **THEN** the original img object is returned directly, with no enhancement performed
#### Scenario: level=2 doubles the effect
- **WHEN** `beauty_enhance(img, level=2.0)` is called
- **THEN** the sharpening, brightness, and saturation adjustments are each twice those at level=1.0
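A minimal sketch of the function described above, assuming Pillow (already used by the image pipeline); the radius and gain values are placeholders pending the tuning flagged in the design's open questions:

```python
from PIL import Image, ImageEnhance, ImageFilter

def beauty_enhance(img: Image.Image, level: float = 1.0) -> Image.Image:
    """Sharpen + brightness/contrast/saturation lift, scaled by `level`.
    Radii and gain factors here are illustrative, not tuned values."""
    if level <= 0:
        return img  # level=0: return the original object untouched
    out = img.filter(ImageFilter.UnsharpMask(radius=2, percent=int(80 * level)))
    out = ImageEnhance.Brightness(out).enhance(1.0 + 0.02 * level)
    out = ImageEnhance.Contrast(out).enhance(1.0 + 0.03 * level)
    out = ImageEnhance.Color(out).enhance(1.0 + 0.05 * level)
    return out

img = Image.new("RGB", (64, 64), (200, 170, 150))
print(beauty_enhance(img, level=0) is img)  # → True (skipped entirely)
print(beauty_enhance(img, level=1.0).size)  # → (64, 64)
```

Each enhancement factor is of the form `1.0 + k * level`, which is what makes `level=2.0` exactly double the `level=1.0` adjustment, as the third scenario requires.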
### Requirement: Post-processing order is beautify first, then anti-AI perturbation
The system SHALL run `beauty_enhance(img, level) → anti_detect_postprocess(img)` in sequence on every output image in the `txt2img` and `img2img` flows, ensuring beautification completes before perturbation is introduced.
#### Scenario: Generated images go through both stages
- **WHEN** `txt2img` generates images successfully
- **THEN** each image passes through `beauty_enhance` first and `anti_detect_postprocess` second before being returned to the caller
### Requirement: enhance_level flows from the UI to the post-processing pipeline
The system SHALL pass an `enhance_level: float` parameter from the Gradio UI through `generate_images()` in `services/content.py` into `SDService.txt2img()` and finally into `beauty_enhance()`. The new parameter defaults to `1.0` for backward compatibility.
#### Scenario: The UI beautify-strength slider reaches the generated result
- **WHEN** the user sets the beautify-strength slider in Advanced Settings to 2.0 and clicks generate
- **THEN** the generated images are processed with `beauty_enhance(img, level=2.0)`
#### Scenario: Legacy callers that omit enhance_level behave as before
- **WHEN** `generate_images()` is called without an `enhance_level` argument
- **THEN** `level=1.0` is used by default, matching pre-optimization behavior

View File

@ -0,0 +1,23 @@
## ADDED Requirements
### Requirement: Each SD model gains a high-quality preset tier
The system SHALL add a `"高画质 (约5分钟)"` tier to every model's `presets` dict in `SD_MODEL_PROFILES`. Its parameters must satisfy: SD 1.5 models — steps ≥ 50, CFG 6.5–7.5, sampler `DPM++ SDE`; SDXL models — steps ≥ 40, CFG 5.5–6.5, sampler `DPM++ 2M SDE`, with Hires Fix enabled (`enable_hr: true`).
#### Scenario: High-quality tier requests include Hires Fix
- **WHEN** the user switches the quality mode to "高画质 (约5分钟)" and the current model is SDXL-architecture
- **THEN** the `txt2img` API payload contains the `enable_hr: true`, `hr_scale: 1.5`, and `hr_upscaler` fields
#### Scenario: The SD 1.5 high-quality tier leaves Hires Fix off
- **WHEN** the user switches the quality mode to "高画质 (约5分钟)" and the current model's `arch == "sd15"`
- **THEN** the payload contains no `enable_hr` field, avoiding OOM
#### Scenario: Fallback when the Hires Fix upscaler is missing
- **WHEN** the upscaler named by `hr_upscaler` (e.g. "4x-UltraSharp") is unavailable in SD WebUI
- **THEN** the system SHALL automatically fall back to `"Latent"` and log a warning
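The three scenarios above can be sketched as one payload builder; the helper name is hypothetical, and the parameter values mirror the ones stated in the requirement:

```python
def build_hires_params(arch: str, quality: str, available_upscalers) -> dict:
    """Inject Hires Fix params only for SDXL high-quality runs; fall back
    to 'Latent' when the preferred upscaler is missing."""
    if arch != "sdxl" or not quality.startswith("高画质"):
        return {}  # SD 1.5 (or any other tier): no enable_hr key at all
    upscaler = "4x-UltraSharp"
    if upscaler not in available_upscalers:
        upscaler = "Latent"  # graceful degradation instead of a failed request
    return {"enable_hr": True, "hr_scale": 1.5,
            "hr_upscaler": upscaler, "denoising_strength": 0.4}

print(build_hires_params("sd15", "高画质 (约5分钟)", []))  # → {}
print(build_hires_params("sdxl", "高画质 (约5分钟)", ["Latent"])["hr_upscaler"])  # → Latent
```

The returned dict would be merged into the `txt2img` payload, so the SD 1.5 branch never even carries an `enable_hr` key, exactly as the second scenario demands.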
### Requirement: Preset names update dynamically in the UI per model
The system SHALL, after the user switches the SD model, update the option list of the UI quality-mode radio button so that it always reflects the preset tier names available for the current model.
#### Scenario: Preset list refreshes after switching models
- **WHEN** the user switches the SD model in the connection settings
- **THEN** the options of the "Generation Mode" radio button on the creation tab SHALL update to the key names of that model's `presets`


@ -0,0 +1,29 @@
## ADDED Requirements
### Requirement: LLM prompt guide includes rules for realistic character description
The guide returned by `get_sd_prompt_guide()` SHALL include a "character description rules" section requiring:
- An explicit ban on unguided generic words such as `beautiful girl`, `pretty woman`, and `good looking`
- Concrete descriptors along three dimensions: facial features (eye shape, nose shape, lip shape), skin tone (fair, luminous, even), and temperament (youthful, gentle, intellectual)
- For prompts containing a person, at least 2 facial-feature/skin-tone words and 1 temperament word
#### Scenario: LLM uses the prescribed vocabulary when generating prompts with people
- **WHEN** the LLM generates an SD prompt containing a character description from the guide
- **THEN** the prompt contains no generic words such as `beautiful`, `pretty`, or `good looking`, but instead concrete descriptors such as `almond eyes`, `porcelain skin`, `youthful appearance`
### Requirement: LLM prompt guide distinguishes writing strategies for character vs. scene-only images
The guide returned by `get_sd_prompt_guide()` SHALL clearly distinguish two writing modes: character-centric images (detailed character description required) and scene/object-only images (no character words; prioritize atmosphere, color, and lighting).
#### Scenario: No character words mistakenly added to scene-only prompts
- **WHEN** the copy topic is interior decoration or product display (no people)
- **THEN** the LLM-generated prompt contains no character-feature descriptors
#### Scenario: Character prompts include a full character description
- **WHEN** the copy topic is outfits, makeup, or a lifestyle scene (with a person)
- **THEN** the LLM-generated prompt contains at least 1 descriptor from each of the three dimensions: facial features, skin tone, temperament
### Requirement: LLM prompt guide includes lighting and photographic-feel conventions
The guide returned by `get_sd_prompt_guide()` SHALL include lighting rules requiring generated prompts to specify a concrete light source (e.g. `soft window light`, `golden hour`, `studio lighting`) rather than generic terms (e.g. `good lighting`).
#### Scenario: Prompts contain concrete lighting words, not generic ones
- **WHEN** the LLM generates an SD prompt from the guide
- **THEN** the prompt contains at least one concrete lighting term such as `soft window light`, `golden hour`, `studio lighting`, or `diffused natural light`
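The vocabulary rules above are mechanical enough to lint. A hypothetical checker for character prompts is sketched below — the word lists come from this spec and its task list, while the checker itself is illustrative and not part of the project:

```python
# Illustrative validation of the "2 feature/skin words + 1 temperament word,
# no generic words" rule for character prompts.
BANNED = {"beautiful girl", "pretty woman", "good looking"}
FEATURE_SKIN = {"almond eyes", "phoenix eyes", "bright doe eyes",
                "porcelain skin", "luminous fair skin", "translucent skin"}
TEMPERAMENT = {"youthful appearance", "elegant temperament",
               "gentle charm", "intellectual beauty"}

def check_character_prompt(prompt: str) -> bool:
    p = prompt.lower()
    if any(b in p for b in BANNED):
        return False
    # character prompts need >= 2 feature/skin words and >= 1 temperament word
    features = sum(w in p for w in FEATURE_SKIN)
    temperament = sum(w in p for w in TEMPERAMENT)
    return features >= 2 and temperament >= 1
```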


@ -0,0 +1,50 @@
## 1. sd_service.py — Chinese-aesthetic keyword optimization
- [x] 1.1 Update the `majicmixRealistic` `prompt_prefix`: add `(almond eyes:1.2)`, `(porcelain skin:1.2)`, `(youthful appearance:1.1)`; ensure `delicate facial features` carries a weight
- [x] 1.2 Update the `realisticVision` `prompt_prefix`: add `(almond eyes:1.1)`, `(luminous fair skin:1.2)`, `(refined features:1.2)`, `(elegant temperament:1.1)`
- [x] 1.3 Update the `juggernautXL` `prompt_prefix`: add `(almond eyes:1.2)`, `(porcelain skin:1.2)`, `(delicate facial features:1.2)`, `(youthful appearance:1.1)`; change `asian girl` to `(chinese beauty:1.2)`
- [x] 1.4 Add `strong jawline, prominent brow ridge, angular facial structure, square jaw, heavy brow` to the `negative_prompt` of all three models
- [x] 1.5 Update the `prompt_boost` of each `PERSONA_SD_PROFILES` persona: ensure every persona includes at least 1 differentiated temperament word tied to Chinese aesthetics (sweet girl — youthful look; intellectual — elegant poise; cyber blogger — refined + futuristic, etc.)
## 2. sd_service.py — High-quality presets and Hires Fix
- [x] 2.1 Add `"高画质 (约5分钟)"` to the `majicmixRealistic` `presets`: steps=50, cfg_scale=7.0, width=640, height=960, sampler="DPM++ SDE", scheduler="Karras", batch_size=1
- [x] 2.2 Add `"高画质 (约5分钟)"` to the `realisticVision` `presets`: steps=50, cfg_scale=7.0, width=640, height=960, sampler="DPM++ SDE", scheduler="Karras", batch_size=1
- [x] 2.3 Add `"高画质 (约5分钟)"` to the `juggernautXL` `presets`: steps=40, cfg_scale=6.0, width=832, height=1216, sampler="DPM++ 2M SDE", scheduler="Karras", batch_size=1, plus `enable_hr: True, hr_scale: 1.5, hr_upscaler: "4x-UltraSharp", hr_second_pass_steps: 20, denoising_strength: 0.4`
- [x] 2.4 In the `txt2img` method: when the preset contains an `enable_hr` field, inject it into the SD API payload; when the architecture is sd15, force-ignore `enable_hr`
- [x] 2.5 Add Hires Fix upscaler fallback logic: catch the SD API error for a missing upscaler, fall back to `"Latent"`, and log a warning
## 3. sd_service.py — Split out the beautification pipeline
- [x] 3.1 Add a `beauty_enhance(img: Image.Image, level: float = 1.0) -> Image.Image` function: return the original image directly when `level=0`
- [x] 3.2 Implement the `beauty_enhance` sharpening logic with `ImageFilter.UnsharpMask(radius=1.5*level, percent=int(120*level), threshold=2)`
- [x] 3.3 Implement the `beauty_enhance` brightness/contrast/saturation boost: brightness factor `1.0 + 0.02*level`, contrast factor `1.0 + 0.02*level`, saturation factor `1.0 + 0.05*level`
- [x] 3.4 Implement the `beauty_enhance` warm-white skin-tone correction: reduce red and raise blue in skin-tone regions (numpy implementation; skip when numpy is unavailable)
- [x] 3.5 Modify the `txt2img` method: insert `beauty_enhance(img, level=enhance_level)` before the `anti_detect_postprocess(img)` call, passing `enhance_level` through the method signature (default `1.0`)
- [x] 3.6 Modify the `img2img` method: apply the same two-stage `beauty_enhance → anti_detect_postprocess` post-processing
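Task 3.4's warm-white correction can be sketched as an optional-numpy helper. The skin-pixel heuristic and the 2% shift below are assumptions chosen for illustration; the spec only mandates "red down, blue up in skin-tone regions, skip without numpy":

```python
# Hedged sketch of the task-3.4 correction: nudge red down and blue up in
# likely skin-tone pixels. Degrades to a no-op when numpy is unavailable.
from PIL import Image

def warm_white_correct(img: Image.Image, level: float = 1.0) -> Image.Image:
    try:
        import numpy as np
    except ImportError:
        return img  # spec: skip the correction when numpy is unavailable
    arr = np.asarray(img.convert("RGB")).astype(np.float32)
    r, g, b = arr[..., 0], arr[..., 1], arr[..., 2]
    # rough RGB skin mask: warm, mid-to-bright pixels with r > g > b ordering
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b)
    arr[..., 0] = np.where(skin, r * (1 - 0.02 * level), r)                    # red down
    arr[..., 2] = np.where(skin, np.minimum(b * (1 + 0.02 * level), 255), b)   # blue up
    return Image.fromarray(arr.clip(0, 255).astype("uint8"))
```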
## 4. llm_service.py — Prompt guide optimization
- [x] 4.1 Add a "character description rules" section to `get_sd_prompt_guide()`: explicitly ban generic words such as `beautiful girl`/`pretty woman`/`good looking`
- [x] 4.2 Add a three-dimension vocabulary (facial features / skin tone / temperament) to the guide: eye shapes (`almond eyes`, `phoenix eyes`, `bright doe eyes`), skin tones (`porcelain skin`, `luminous fair skin`, `translucent skin`), temperament (`youthful`, `elegant temperament`, `gentle charm`, `intellectual beauty`)
- [x] 4.3 Add the character-image vs. scene-only-image writing-strategy distinction to the guide
- [x] 4.4 Add concrete lighting conventions to the guide: require terms such as `soft window light`/`golden hour`/`studio lighting`/`diffused natural light`; forbid `good lighting`
## 5. services/content.py — Parameter passing
- [x] 5.1 Add an `enhance_level: float = 1.0` parameter to `generate_images()`
- [x] 5.2 Pass `enhance_level` into the `svc.txt2img()` call
## 6. ui/tab_create.py — UI control
- [x] 6.1 Add an `enhance_level` slider to the "Advanced Settings (override presets)" accordion: `gr.Slider(0.0, 2.0, value=1.0, step=0.1, label="美化强度", info="0=关闭 1=默认 2=强化")`
- [x] 6.2 Store the `enhance_level` slider value in the config (`fn_cfg_set("enhance_level", ...)`) and read it back on load
- [x] 6.3 Add `enhance_level` to the `inputs` list of `btn_gen_img.click` and update the `fn_gen_img` function signature
## 7. Verification and testing
- [ ] 7.1 Generate test images containing people: confirm `majicmixRealistic` and `juggernautXL` output faces with clearly Chinese features
- [ ] 7.2 Test the high-quality tier: confirm the SDXL model's API payload contains `enable_hr: true`
- [ ] 7.3 Test that `beauty_enhance(img, level=0)` returns the image unchanged
- [ ] 7.4 Test `beauty_enhance(img, level=2.0)`: visually no yellow or red cast
- [ ] 7.5 Verify that `generate_images()` without `enhance_level` behaves identically to the pre-change version (regression test)


@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-26


@ -0,0 +1,36 @@
## Context
The project workspace root (`f:\3_Personal\AI\xhs_bot\autobot\`) has accumulated several useless files during development: an old backup (`main_v1_backup.py`), a config copy (`config copy.json`), a one-off test script (`_test_config_save.py`), temporary notes (`Todo.md`), a run log (`autobot.log`), and scattered personal images (4 `.png`/`.jpg` files).
Although these files are already excluded from version control by `.gitignore`, they still sit in the workspace and hurt directory readability. No runtime logic needs to change.
## Goals / Non-Goals
**Goals:**
- Delete the 5 clearly useless files
- Move the 4 personal images into the `assets/faces/` directory and update `.gitignore`
- Improve `.gitignore` comments and rule coverage
- Establish a `project-structure` spec as a reference for future maintenance
**Non-Goals:**
- Do not migrate the root-level service files (`analytics_service.py`, `llm_service.py`, etc.) into `services/` (that requires import changes and is left as a separate change)
- Do not modify any Python business code
- Do not touch the Docker / CI configuration
## Decisions
**Decision 1: Archive the images into `assets/faces/`**
- `assets/faces/` was chosen over outright deletion because the face-swap feature may need default avatar paths in the UI; keeping the files avoids breaking demo/teaching scenarios
- Alternative: delete them → the risk is that user docs reference these file paths
**Decision 2: Put `assets/faces/` as a whole into `.gitignore`**
- Personal images are privacy-sensitive data and should not enter version control
- Add an `assets/faces/` rule to `.gitignore` with a comment
**Decision 3: Delete `Todo.md` rather than migrating it**
- Its content is outdated and already superseded by the openspec task-management system; migration has little value
## Risks / Trade-offs
- **[Risk] paths such as `my_face.png` may be referenced by `config.json`** → Mitigation: check `config.json` for such path fields before deleting; if present, move only (do not delete) and update the note in `config.example.json`
- **[Risk] `_autostart.bat` / `_autostart.vbs` are unrelated to the `assets/faces/` paths but are also non-standard root files** → Keep them this time; the autostart scripts serve a valid use case


@ -0,0 +1,33 @@
## Why
The project root mixes in backup files, temporary test scripts, personal images, and other unrelated files, making the directory structure messy and the formal code hard to identify. Some content (face images, log files) is already excluded by `.gitignore` but still lingers in the workspace and must be cleaned up manually.
## What Changes
- **Delete redundant/backup files**
  - `main_v1_backup.py` (old backup, already covered by the `.gitignore` `*_backup.py` rule)
  - `config copy.json` (config copy, already covered by `.gitignore`)
  - `_test_config_save.py` (one-off test script, no retention value)
  - `Todo.md` (personal scratch notes, superseded by openspec task management)
  - `autobot.log` (run log, already covered by the `*.log` rule)
- **Organize personal images**: move the face/avatar images scattered in the root (`beauty.png`, `myself.jpg`, `my_face.png`, `zjz.png`) into `assets/faces/`, and update `.gitignore` to ignore that directory
- **Evaluate the root-level service files**: check whether `analytics_service.py`, `llm_service.py`, `sd_service.py`, `mcp_client.py`, `publish_queue.py`, `config_manager.py` should migrate into `services/`; if that entails large import changes, list it as a separate follow-up change and only record the assessment here
- **Complete `.gitignore`**: ensure rules such as `assets/faces/`, `*.log`, `__pycache__/` are present with clear comments
## Capabilities
### New Capabilities
- `project-structure`: defines the project's standard directory layout, the root-file inventory rules, and the `.gitignore` policy
### Modified Capabilities
(no existing specs need changes)
## Impact
- Deletes 5 files; no runtime logic is affected
- After `assets/faces/` is added, the face-swap feature's default avatar path may need an updated note in `config.example.json`
- No impact on the Docker build (`.dockerignore` already exists)
- No API changes, no dependency changes


@ -0,0 +1,49 @@
## ADDED Requirements
### Requirement: The root directory contains only formal project files
The project root SHALL contain no backup files, one-off test scripts, personal media assets, or temporary note files. Specifically:
- Files named `*_backup.py` SHALL NOT exist in the root
- `config copy.json` (or similar config copies) SHALL NOT exist
- One-off test scripts named `_test_*.py` SHALL NOT exist in the root
- `*.log` run logs SHALL NOT enter version control (guaranteed by `.gitignore`)
- Personal images (`.png`, `.jpg`, and other media files) SHALL NOT be scattered in the root; they belong in the matching subdirectory under `assets/`
#### Scenario: No redundant files in the root after cloning
- **WHEN** a developer runs `git clone` and inspects the root directory
- **THEN** the root SHALL contain only: `main.py`, service modules (`*_service.py`, `*_client.py`, `*_manager.py`, `*_queue.py`), standard config (`config.example.json`, `requirements.txt`, `Dockerfile`, `docker-compose.yml`, `.gitignore`, `.dockerignore`), autostart scripts (`_autostart.*`), and docs (`README.md`, `CHANGELOG.md`, `CONTRIBUTING.md`, `mcp.md`)
#### Scenario: Backup files do not appear in version control
- **WHEN** a developer runs `git status` or `git ls-files`
- **THEN** the output SHALL NOT include backup/copy files such as `*_backup.py` or `config copy.json`
### Requirement: Media assets live under assets/
Image and other media assets needed by the project SHALL live in subdirectories of `assets/`, classified by purpose:
- Face-swap/avatar images SHALL go into `assets/faces/`
- `assets/faces/` SHALL be covered by `.gitignore` (privacy data; not version-controlled)
#### Scenario: Face-swap image path follows the convention
- **WHEN** a user configures the face-swap avatar feature
- **THEN** the avatar files SHALL be stored in `assets/faces/`, not the project root
#### Scenario: assets/faces/ stays out of version control
- **WHEN** a developer runs `git status`
- **THEN** files under `assets/faces/` SHALL show as ignored (appearing in neither the staged nor the unstaged area)
### Requirement: .gitignore covers all non-version-controlled content
The project `.gitignore` SHALL include the following rule categories, each with a comment explaining its purpose:
- Python build artifacts (`__pycache__/`, `*.py[cod]`)
- Virtual-environment directories (`.venv/`, `venv/`)
- Sensitive config (`config.json`, `cookies.json`, `*.cookie`)
- Run logs (`*.log`, `logs/`)
- Backup and copy files (`*_backup.py`, `config copy.json`)
- Personal media assets (`assets/faces/`)
- IDE config (`.vscode/`, `.idea/`)
- System files (`.DS_Store`, `Thumbs.db`)
- Workspace output directory (`xhs_workspace/`)
#### Scenario: A new backup file is not tracked by git
- **WHEN** a developer creates `main_v2_backup.py` in the root
- **THEN** `git status` SHALL show it as an ignored file


@ -0,0 +1,25 @@
## 1. Delete redundant files
- [x] 1.1 Delete `main_v1_backup.py` (old backup, no retention value)
- [x] 1.2 Delete `config copy.json` (config copy, no retention value)
- [x] 1.3 Delete `_test_config_save.py` (one-off test script)
- [x] 1.4 Delete `Todo.md` (scratch notes, superseded by openspec)
- [x] 1.5 Delete `autobot.log` (run log, already covered by `.gitignore`)
## 2. Organize image assets
- [x] 2.1 Create the `assets/faces/` directory
- [x] 2.2 Move `my_face.png` to `assets/faces/my_face.png`
- [x] 2.3 Move `beauty.png`, `myself.jpg`, `zjz.png` into `assets/faces/`
- [x] 2.4 Update `FACE_IMAGE_PATH` on line 22 of `sd_service.py`: change the path from `"my_face.png"` to `os.path.join(os.path.dirname(__file__), "assets", "faces", "my_face.png")`
## 3. Complete .gitignore
- [x] 3.1 Add an `assets/faces/` rule to `.gitignore` with a comment (personal avatars stay out of version control)
- [x] 3.2 Confirm the `*.log` rule already exists in `.gitignore` (it does; just verify)
## 4. Regression verification
- [ ] 4.1 Launch the app with `python main.py` and confirm the face-swap feature (⚙️ Config Tab → AI face-swap avatar) loads correctly
- [ ] 4.2 Confirm `SDService.load_face_image()` reads `my_face.png` from the new path
- [x] 4.3 Run `git status` to confirm `assets/faces/` is ignored and no redundant files remain in the root


@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-27


@ -0,0 +1,55 @@
## Context
The project root currently holds 6 business service files (`config_manager.py`, `llm_service.py`, `sd_service.py`, `mcp_client.py`, `analytics_service.py`, `publish_queue.py`) side by side with the `services/` package directory, blurring the architectural layering. Downstream code under `services/` (e.g. `scheduler.py`) imports these files by bare name (flat imports), relying on the runtime behavior that Python implicitly adds the root directory to `sys.path`.
Migration constraints:
- `services/__init__.py` already exists; `services/` is a valid Python package
- The affected files contain roughly 20 imports in total, spread across `main.py`, `ui/app.py`, `ui/tab_create.py`, and `services/*.py`
- No Python circular imports may be introduced
## Goals / Non-Goals
**Goals:**
- Move the 6 service files into `services/`, so the root keeps only `main.py` as its single Python file
- All external imports use absolute paths (`from services.xxx import ...`) for consistency and readability
- Files inside `services/` use relative imports between each other (`from .xxx import ...`) to reduce path dependence
- Introduce no new runtime dependencies or behavior changes
**Non-Goals:**
- Do not split or merge any module's internal logic
- Do not change the public exports of `services/__init__.py` (unless a compatibility shim is needed)
- Do not migrate files under `ui/` (it already has its own module structure)
## Decisions
**Decision 1: External files use absolute imports `from services.xxx import ...`**
- `main.py`, `ui/app.py`, and `ui/tab_create.py` all run from the root; absolute import paths are clear and produce readable error messages
- Alternative: re-export all symbols in `services/__init__.py` (backward compatible) → adds maintenance burden; rejected
**Decision 2: Files inside `services/` use relative imports `from .xxx import ...`**
- Avoids implicit dependence on the root directory inside `services/`; packaging and tests stay independent of `sys.path`
- Alternative: absolute imports there too → workable, but less cohesive than relative imports
**Decision 3: Migrate in two stages — move the files first, then fix the imports**
- Prevents an intermediate state where simultaneous multi-file edits make errors hard to localize
- Migration order: `config_manager` → `mcp_client` → `llm_service` → `sd_service` → `analytics_service` → `publish_queue` (shallowest dependencies first)
**Decision 4: No compatibility shim**
- The project has no external PyPI consumers and thus no backward-compatibility pressure; rewriting all imports directly is cleaner
## Risks / Trade-offs
- **[Risk] circular imports inside `services/`** → Mitigation: confirm the dependency graph with `grep` before migrating; `config_manager` is typically a leaf node (fewest dependencies) and is migrated first
- **[Risk] `ui/app.py` fails to start mid-migration** → Mitigation: change all imports in one pass, leaving no half-migrated state; validate syntax with `ast.parse()` immediately after the migration
- **[Risk] the Dockerfile's `COPY . .` copies the whole tree** → No impact; the `services/` subdirectory is copied normally
- **[Risk] the first CI run may fail due to missing GitHub Secrets (e.g. no `GITHUB_TOKEN`)** → Mitigation: the CI only runs static checks and needs no Secrets
## Migration Plan
1. `Move-Item` the 6 files into `services/`
2. Bulk-replace the imports in all external files (`main.py`, `ui/*.py`): `from xxx` → `from services.xxx`
3. Bulk-replace the internal imports in `services/*.py`: `from xxx` → `from .xxx`
4. Syntax check: run `ast.parse()` over every modified file
5. Startup check: `python -c "from ui.app import build_app"` confirms there are no import errors
**Rollback plan:** delete the newly migrated files under `services/`, restore via git (or move them back to the root manually), and revert the imports


@ -0,0 +1,57 @@
## Why
The project is functionally complete but has two problems: ① it lacks the standard assets of a healthy open-source project (Issue/PR templates, CI, Code of Conduct, Security Policy), which raises the bar for community collaboration and lowers credibility; ② the root directory mixes 6 service files (`llm_service.py`, `sd_service.py`, `mcp_client.py`, `analytics_service.py`, `publish_queue.py`, `config_manager.py`) alongside the `services/` module directory, blurring the architectural layering. Both are addressed together on top of the stable state left by the directory cleanup.
## What Changes
### Directory restructuring
- **Migrate the 6 root-level service files into `services/`**
  - `config_manager.py` → `services/config_manager.py`
  - `llm_service.py` → `services/llm_service.py`
  - `sd_service.py` → `services/sd_service.py`
  - `mcp_client.py` → `services/mcp_client.py`
  - `analytics_service.py` → `services/analytics_service.py`
  - `publish_queue.py` → `services/publish_queue.py`
- **Update the import statements in all affected files** (`main.py`, `ui/app.py`, `ui/tab_create.py`, `services/*.py` — about 20 sites)
- **Update `services/__init__.py`** to export the new modules (optional, for backward compatibility)
Target root layout:
```
autobot/
├── main.py                    # entry point (the only root-level .py)
├── ui/                        # UI layer (Gradio)
├── services/                  # all business logic (after migration)
├── assets/                    # static assets
├── config.example.json        # config template
├── requirements.txt           # production dependencies
├── requirements-dev.txt       # dev dependencies (new)
├── Dockerfile / docker-compose.yml
└── *.md / LICENSE             # docs
```
### Open-source community standard files
- **New GitHub Issue templates**: `.github/ISSUE_TEMPLATE/bug_report.md` and `feature_request.md`
- **New PR template**: `.github/pull_request_template.md`
- **New CI workflow**: `.github/workflows/ci.yml` (Push/PR triggers ruff + import verification)
- **New Code of Conduct**: `CODE_OF_CONDUCT.md` (Contributor Covenant v2.1, Chinese edition)
- **New Security Policy**: `SECURITY.md`
- **New `requirements-dev.txt`** — ruff, pre-commit
- **README polish**: top badges (Python, License, CI), corrected project-structure diagram, `your-username` placeholder replaced
## Capabilities
### New Capabilities
- `project-restructure`: root-level service files migrate into `services/`, all imports updated, layering clarified
- `oss-community-health`: Issue templates, PR template, Code of Conduct, Security Policy
- `oss-ci-workflow`: GitHub Actions CI (ruff lint + import verification)
- `oss-readme-polish`: README badges, structure corrections, placeholder fixes
### Modified Capabilities
(no existing specs need changes)
## Impact
- `project-restructure`: touches `main.py`, `ui/app.py`, `ui/tab_create.py`, and every file under `services/` (import paths only; no business-logic changes)
- The community files and CI are additive only; no existing code changes
- The Dockerfile `COPY` instruction is compatible (whole-tree copy, no change needed)
- No impact on runtime behavior, Docker deployment, or dependencies


@ -0,0 +1,37 @@
## ADDED Requirements
### Requirement: GitHub Actions CI workflow triggers on Push and PR
The project SHALL provide a continuous-integration workflow at `.github/workflows/ci.yml` that automatically runs code-quality checks on every Push to the `main` branch and on every Pull Request.
#### Scenario: CI runs automatically on PRs
- **WHEN** a contributor opens a Pull Request against `main`
- **THEN** GitHub Actions SHALL trigger the CI workflow automatically and show the check results on the PR page
#### Scenario: Status badge updates after CI finishes
- **WHEN** the CI workflow finishes
- **THEN** the workflow status SHALL be retrievable via the badge URL for display in the README
### Requirement: CI runs ruff style checks
The CI workflow SHALL run `ruff` over all Python files to check style and common errors; the ruff configuration SHALL let the project's existing code pass (a lenient rule set).
#### Scenario: Style check passes
- **WHEN** the ruff step in the CI workflow runs
- **THEN** the ruff check over all `*.py` files SHALL exit with code 0 and the workflow is marked passed
#### Scenario: CI fails when an obvious error is introduced
- **WHEN** a PR contains an unused import or an obvious syntax problem
- **THEN** ruff SHALL detect it and return a non-zero exit code, failing CI
### Requirement: CI runs import verification
The CI workflow SHALL run a Python import-verification step that confirms the core modules can be imported, catching migration-induced import errors early.
#### Scenario: Import verification passes
- **WHEN** CI runs the import-verification step
- **THEN** key imports such as `python -c "from services.config_manager import ConfigManager"` SHALL succeed
### Requirement: Provide a requirements-dev.txt dev-dependency declaration
The project root SHALL contain `requirements-dev.txt`, declaring the development and CI dependencies (ruff, pre-commit, etc.) separately from the production `requirements.txt`.
#### Scenario: Installing the dev dependencies works
- **WHEN** a developer runs `pip install -r requirements-dev.txt`
- **THEN** all dev tools SHALL install successfully with no version conflicts


@ -0,0 +1,33 @@
## ADDED Requirements
### Requirement: Provide standardized GitHub Issue templates
The project SHALL provide at least two Issue templates under `.github/ISSUE_TEMPLATE/`: a bug-report template and a feature-request template, guiding contributors to supply the necessary information.
#### Scenario: Bug-report template contains the required fields
- **WHEN** a user opens a new Issue on GitHub and chooses "Bug Report"
- **THEN** the template SHALL include: problem description, reproduction steps, expected behavior, actual behavior, and environment info (Python version, operating system)
#### Scenario: Feature-request template includes a scenario description
- **WHEN** a user chooses the "Feature Request" template
- **THEN** the template SHALL include: problem/need background, desired solution, alternatives considered
### Requirement: Provide a Pull Request template
The project SHALL provide a PR template at `.github/pull_request_template.md`, guiding contributors to describe the change scope and testing done.
#### Scenario: PR template includes a change description and test confirmation
- **WHEN** a contributor opens a Pull Request on GitHub
- **THEN** the template SHALL include: change type (Bug Fix / Feature / Docs / Refactor), change description, testing notes, and related Issue references
### Requirement: Include a code of conduct file
The project root SHALL contain `CODE_OF_CONDUCT.md`, adopting the Chinese edition of Contributor Covenant v2.1 and clarifying the community's behavioral norms and how violations are handled.
#### Scenario: The code of conduct is accessible
- **WHEN** a contributor browses the project root
- **THEN** `CODE_OF_CONDUCT.md` SHALL exist and contain the community norms, scope, enforcement notes, and contact information
### Requirement: Include a security vulnerability reporting policy
The project root SHALL contain `SECURITY.md`, explaining how to disclose security vulnerabilities responsibly, the supported version range, and the response-time commitment.
#### Scenario: The security policy explains how to report
- **WHEN** a security researcher discovers a vulnerability
- **THEN** `SECURITY.md` SHALL provide a private contact channel (email or GitHub Security Advisory) and SHALL NOT require reporting via a public Issue


@ -0,0 +1,29 @@
## ADDED Requirements
### Requirement: README shows status badges at the top
`README.md` SHALL include at least three badges below the top title — Python version requirement, license type, and CI status — in shields.io or GitHub Actions badge format.
#### Scenario: Badges render on the GitHub page
- **WHEN** visiting the project's GitHub home page
- **THEN** the top of the README SHALL show clickable Python, MIT License, and CI status badges linking to the corresponding resources
### Requirement: README project-structure diagram reflects the actual code
The "Project Structure" section of `README.md` SHALL reflect the post-migration directory layout, with the correct hierarchy for `services/` (all migrated files) and `ui/` (`app.py`, `tab_create.py`).
#### Scenario: Project structure matches the ls output
- **WHEN** a developer compares the README against the actual file tree
- **THEN** the README's structure diagram SHALL match the actual `Get-ChildItem` / `ls` output, with no stale files or missing directories
### Requirement: README contains no your-username placeholder
Every `your-username` placeholder in `README.md` SHALL be replaced with the actual repository path or a format example, so that the clone/install commands can be copied verbatim.
#### Scenario: Install commands need no manual placeholder replacement
- **WHEN** a user copies the `git clone` command from the README
- **THEN** the command SHALL contain the actual repository URL or an explicit `<your-github-username>` format hint, with no `your-username` string
### Requirement: README usage guide matches the current UI structure
The "Usage Guide" and "First-Run Flow" sections of `README.md` SHALL reference the correct current Tab names and navigation paths, consistent with the actual Tab order in `ui/app.py` (the ⚙️ Config Tab has been moved; the path is no longer "expand the global settings accordion").
#### Scenario: First-run steps match the UI
- **WHEN** a new user follows the README "First-Run Flow"
- **THEN** the Tab names and entry points described in the README SHALL match the actual Gradio UI, with no guesswork required


@ -0,0 +1,34 @@
## ADDED Requirements
### Requirement: Service-layer files are unified under the services/ package
All business service modules SHALL live under `services/`; apart from `main.py`, the root SHALL contain no business `.py` files.
Migration list:
- `config_manager.py` → `services/config_manager.py`
- `llm_service.py` → `services/llm_service.py`
- `sd_service.py` → `services/sd_service.py`
- `mcp_client.py` → `services/mcp_client.py`
- `analytics_service.py` → `services/analytics_service.py`
- `publish_queue.py` → `services/publish_queue.py`
#### Scenario: No stray service files in the root
- **WHEN** a developer inspects the project root
- **THEN** the root SHALL contain only `main.py` as the single Python entry point, with every other `.py` file under the `ui/` or `services/` subdirectories
### Requirement: External modules access services/ via absolute imports
Root-level/UI-layer files such as `main.py`, `ui/app.py`, and `ui/tab_create.py` SHALL import service modules with the absolute form `from services.<module> import ...`.
#### Scenario: main.py starts without ImportError
- **WHEN** `python main.py` runs from the project root
- **THEN** the app SHALL start normally without raising any `ImportError` or `ModuleNotFoundError`
#### Scenario: UI-layer import paths resolve
- **WHEN** `python -c "import ui.app"` runs
- **THEN** no import error is raised and every `from services.*` reference SHALL resolve normally
### Requirement: services/ uses relative imports internally
Mutual references between modules inside the `services/` package SHALL use the relative form `from .<module> import ...`, independent of the root directory's position on `sys.path`.
#### Scenario: Internal imports are independent of the run context
- **WHEN** `python -m services.scheduler` (or a similar module test) runs from any working directory
- **THEN** the internal relative imports SHALL resolve normally, unaffected by the working directory


@ -0,0 +1,186 @@
## Tasks
### 1. Migrate service files into the services/ package (project-restructure)
- [x] **1.1** Move `config_manager.py` into `services/`
```powershell
Move-Item config_manager.py services\config_manager.py
```
- [x] **1.2** Move `mcp_client.py` into `services/`
```powershell
Move-Item mcp_client.py services\mcp_client.py
```
- [x] **1.3** Move `llm_service.py` into `services/`
```powershell
Move-Item llm_service.py services\llm_service.py
```
- [x] **1.4** Move `sd_service.py` into `services/`
```powershell
Move-Item sd_service.py services\sd_service.py
```
- [x] **1.5** Move `analytics_service.py` into `services/`
```powershell
Move-Item analytics_service.py services\analytics_service.py
```
- [x] **1.6** Move `publish_queue.py` into `services/`
```powershell
Move-Item publish_queue.py services\publish_queue.py
```
---
### 2. Update the external files to absolute imports (main.py, ui/)
- [x] **2.1** Update the imports in `main.py`
  - `from config_manager import ConfigManager, OUTPUT_DIR` → `from services.config_manager import ConfigManager, OUTPUT_DIR`
  - `from llm_service import LLMService` → `from services.llm_service import LLMService`
- [x] **2.2** Update the imports in `ui/app.py`
  - `from config_manager import ConfigManager` → `from services.config_manager import ConfigManager`
  - `from sd_service import SDService, DEFAULT_NEGATIVE, FACE_IMAGE_PATH, ...` → `from services.sd_service import SDService, DEFAULT_NEGATIVE, FACE_IMAGE_PATH, ...`
  - `from analytics_service import AnalyticsService` → `from services.analytics_service import AnalyticsService`
  - `from publish_queue import STATUS_LABELS` → `from services.publish_queue import STATUS_LABELS`
- [x] **2.3** Update the imports in `ui/tab_create.py` (check and replace every reference to a root-level service module)
---
### 3. Switch the services/ internals to relative imports
- [x] **3.1** Update `services/scheduler.py`
  - `from config_manager import ConfigManager, OUTPUT_DIR` → `from .config_manager import ConfigManager, OUTPUT_DIR`
  - `from llm_service import LLMService` → `from .llm_service import LLMService`
  - `from sd_service import SDService` → `from .sd_service import SDService`
  - `from mcp_client import get_mcp_client` → `from .mcp_client import get_mcp_client`
  - `from analytics_service import AnalyticsService` → `from .analytics_service import AnalyticsService`
- [x] **3.2** Update `services/content.py`
  - `from config_manager import ConfigManager, OUTPUT_DIR` → `from .config_manager import ConfigManager, OUTPUT_DIR`
  - `from llm_service import LLMService` → `from .llm_service import LLMService`
  - `from sd_service import SDService, get_sd_preset` → `from .sd_service import SDService, get_sd_preset`
  - `from mcp_client import get_mcp_client` → `from .mcp_client import get_mcp_client`
- [x] **3.3** Update `services/hotspot.py`
  - `from llm_service import LLMService` → `from .llm_service import LLMService`
  - `from mcp_client import get_mcp_client` → `from .mcp_client import get_mcp_client`
- [x] **3.4** Update `services/engagement.py`
  - `from mcp_client import get_mcp_client` → `from .mcp_client import get_mcp_client`
  - `from llm_service import LLMService` → `from .llm_service import LLMService`
- [x] **3.5** Update `services/profile.py`
  - `from mcp_client import get_mcp_client` → `from .mcp_client import get_mcp_client`
- [x] **3.6** Update `services/persona.py`
  - `from config_manager import ConfigManager` → `from .config_manager import ConfigManager`
- [x] **3.7** Check `services/queue_ops.py`, `services/rate_limiter.py`, `services/autostart.py`, `services/connection.py` for root-module references and update as needed
---
### 4. Regression verification — imports and syntax
- [x] **4.1** Run a Python syntax check over every modified file
```powershell
python -c "
import ast, pathlib
files = ['main.py','ui/app.py','ui/tab_create.py',
         'services/scheduler.py','services/content.py',
         'services/hotspot.py','services/engagement.py',
         'services/profile.py','services/persona.py']
for f in files:
    ast.parse(pathlib.Path(f).read_text(encoding='utf-8'))
    print(f'OK: {f}')
"
```
- [x] **4.2** Run the core-service import checks
```powershell
python -c "from services.config_manager import ConfigManager; print('config_manager OK')"
python -c "from services.llm_service import LLMService; print('llm_service OK')"
python -c "from services.sd_service import SDService; print('sd_service OK')"
python -c "from services.mcp_client import get_mcp_client; print('mcp_client OK')"
python -c "from services.analytics_service import AnalyticsService; print('analytics_service OK')"
python -c "from services.publish_queue import STATUS_LABELS; print('publish_queue OK')"
```
- [x] **4.3** Run the UI-layer import check
```powershell
python -c "import ui.app; print('ui.app OK')"
```
- [x] **4.4** Confirm the root holds no stray business `.py` files (`Get-ChildItem` without `-Recurse` lists only the top level)
```powershell
Get-ChildItem -Path . -Filter "*.py" | Select-Object Name
# expect only main.py (plus test scripts such as _test_config_save.py)
```
---
### 5. Add the community health files (oss-community-health)
- [x] **5.1** Create `.github/ISSUE_TEMPLATE/bug_report.md` (bug-report template)
  Includes: problem description, reproduction steps, expected behavior, actual behavior, environment info (Python version, OS)
- [x] **5.2** Create `.github/ISSUE_TEMPLATE/feature_request.md` (feature-request template)
  Includes: background/need, desired solution, alternatives
- [x] **5.3** Create `.github/pull_request_template.md` (PR template)
  Includes: change type (Bug Fix / Feature / Docs / Refactor), change description, testing notes, related Issue
- [x] **5.4** Create `CODE_OF_CONDUCT.md` (Contributor Covenant v2.1, Chinese edition)
- [x] **5.5** Create `SECURITY.md` (security vulnerability reporting policy)
  Includes: supported versions, private reporting channel (GitHub Security Advisory), response-time commitment
---
### 6. Add the CI workflow (oss-ci-workflow)
- [x] **6.1** Create `requirements-dev.txt` containing `ruff>=0.4.0`
- [x] **6.2** Create `.github/workflows/ci.yml`
  - trigger: `push` to `main`, `pull_request` to `main`
  - job `lint`:
    - `pip install ruff`
    - `ruff check . --select E,F,W --ignore E501` (lenient rules; line length ignored)
  - job `import-check`:
    - `pip install -r requirements.txt`
    - `python -c "from services.config_manager import ConfigManager"`
    - `python -c "from services.llm_service import LLMService"`
    - `python -c "from services.sd_service import SDService"`
---
### 7. Polish the README (oss-readme-polish)
- [x] **7.1** Add badges below the README title (Python, MIT License, CI status)
```markdown
![Python](https://img.shields.io/badge/python-3.10+-blue)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![CI](https://github.com/<your-github-username>/autobot/actions/workflows/ci.yml/badge.svg)](https://github.com/<your-github-username>/autobot/actions/workflows/ci.yml)
```
> Replace `<your-github-username>` with the actual GitHub username
- [x] **7.2** Fix the README "Project Structure" section to reflect the full post-migration contents of `services/`
- [x] **7.3** Search for and replace the `your-username` placeholder globally
```powershell
Select-String -Path README.md -Pattern "your-username"
# confirm every occurrence, then replace manually or in bulk
```
- [x] **7.4** Check that the Tab names in the "First-Run Flow" match the actual Gradio UI
---
### 8. Final verification
- [x] **8.1** Run `git status` to confirm the changed files match expectations
- [x] **8.2** Run `git diff --stat` to confirm no unexpected files were modified
- [x] **8.3** Launch the app with `python main.py` and confirm the Gradio UI loads with no startup errors

openspec/config.yaml

@ -0,0 +1,18 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
context: |
使用中文
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: JSON file writes use atomic operations
`ConfigManager.save()`, `AnalyticsService._save_analytics()`, and `AnalyticsService._save_weights()` SHALL persist data by writing to a temporary file and then atomically renaming it with `os.replace()`. The temporary file SHALL be created in the same directory (same volume) as the target file to guarantee the atomicity of `os.replace()`.
#### Scenario: Interrupting the process mid-write leaves no corrupt file
- **WHEN** the process is forcibly terminated during a JSON write
- **THEN** the target file keeps its complete pre-write state, with no empty or half-written JSON
#### Scenario: A normal write replaces the target file
- **WHEN** `ConfigManager.save()` is called with valid data
- **THEN** the target `config.json` is updated to the latest content and any temporary file that existed before the write has been cleaned up
#### Scenario: The temporary file shares the target's directory
- **WHEN** any atomic-write function is called
- **THEN** the temporary file's parent directory equals the target file's parent directory (implemented via `tempfile.mkstemp(dir=<target_dir>)`)
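The write-temp-then-`os.replace()` pattern required above can be sketched as follows. The helper name `atomic_save_json` is illustrative, not the project's actual function:

```python
# Hedged sketch of the atomic JSON write: temp file in the target's own
# directory (same volume), fsync, then an atomic os.replace().
import json
import os
import tempfile

def atomic_save_json(path: str, data: dict) -> None:
    target_dir = os.path.dirname(os.path.abspath(path))
    # temp file in the SAME directory so os.replace() stays on one volume
    fd, tmp = tempfile.mkstemp(dir=target_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, path)  # atomic on POSIX and on Windows (same volume)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise
```

If the process dies before `os.replace()`, the target file is untouched; if it dies after, the new content is already complete — which is exactly the first scenario's guarantee.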


@ -0,0 +1,35 @@
## ADDED Requirements
### Requirement: Each SD model's prompt_prefix includes Chinese-aesthetic facial keywords
The system SHALL include the following Chinese-aesthetic keywords in the `prompt_prefix` of every model in `SD_MODEL_PROFILES` (exact weights may be tuned per model style, but none may be omitted):
- Eye shape: `(almond eyes:1.2)` or `(delicate almond-shaped eyes:1.2)`
- Skin tone: `(porcelain skin:1.2)` or `(fair porcelain skin:1.2)` or `(luminous fair skin:1.2)`
- Refined features: `(delicate facial features:1.2)` or `(refined features:1.2)`
- Temperament: `(youthful appearance:1.1)` or `(elegant temperament:1.1)`
#### Scenario: Generated portraits show the refined look favored by mainstream Chinese aesthetics
- **WHEN** any SD model generates an image containing a person
- **THEN** the `prompt_prefix` contains at least one keyword from each of the three groups: almond eyes, fair skin tone, refined features
### Requirement: Each SD model's negative_prompt adds Western-face exclusion words
The system SHALL include the following Western facial-feature exclusion words in the `negative_prompt` of every model in `SD_MODEL_PROFILES` (in addition to what already exists):
- `strong jawline`, `prominent brow ridge`, `deep-set eyes` (keep the existing weights if already present)
- `angular facial structure`, `square jaw`, `heavy brow`
#### Scenario: negative_prompt explicitly excludes Western facial structure
- **WHEN** any model generates a portrait
- **THEN** the negative_prompt contains at least two of the exclusion words `strong jawline`, `prominent brow ridge`, `deep-set eyes`
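The "add in addition to what already exists, keeping existing weights" rule above suggests a duplicate-aware merge. A hypothetical helper is sketched below; the function name and merge strategy are illustrative, not project code:

```python
# Illustrative merge of the exclusion words into an existing negative prompt
# without duplicating terms that already appear (possibly weighted).
EXCLUSIONS = [
    "strong jawline", "prominent brow ridge", "deep-set eyes",
    "angular facial structure", "square jaw", "heavy brow",
]

def merge_negative_prompt(existing: str) -> str:
    terms = [t.strip() for t in existing.split(",") if t.strip()]
    # ignore (word:1.2) weight syntax when checking for presence
    present = {t.strip("() ").split(":")[0] for t in terms}
    for word in EXCLUSIONS:
        if word not in present:
            terms.append(word)
    return ", ".join(terms)
```

A weighted term such as `(strong jawline:1.3)` survives untouched, satisfying the "keep existing weights" clause.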
### Requirement: Each PERSONA_SD_PROFILES persona carries a differentiated Chinese-style temperament word
The system SHALL ensure that in `PERSONA_SD_PROFILES`, every persona's `prompt_boost` includes at least 1 temperament word tied to Chinese aesthetics, and that these words differ between personas (e.g. sweet girl — youthful look; intellectual — elegant poise; cyber blogger — refined + futuristic).
#### Scenario: Different personas produce perceptibly different temperaments
- **WHEN** one batch of images is generated with the "sweet girl" persona and another with the "intellectual" persona
- **THEN** the core temperament words in the two batches' `prompt_boost` do not overlap, reflecting differentiated styles
### Requirement: Beautification corrects skin-tone drift
The `beauty_enhance()` function SHALL nudge the hue toward warm white (warm/natural skin tones) when boosting saturation, preventing the boost from making skin look yellow or red. Concretely, the PIL hue is adjusted toward warm white within ±5°.
#### Scenario: No yellow or red cast after beautification
- **WHEN** `beauty_enhance(img, level=1.0)` processes an image containing a person
- **THEN** while the skin regions gain saturation, the hue stays within the natural warm-white range (visually no yellow or red cast)


@ -0,0 +1,38 @@
## ADDED Requirements
### Requirement: beauty_enhance as a standalone beautification function
The system SHALL provide a `beauty_enhance(img: Image.Image, level: float = 1.0) -> Image.Image` function in `sd_service.py`, supporting the following enhancements (all intensities scale linearly with `level`):
- Smart sharpening (based on `ImageFilter.UnsharpMask`, emphasizing facial contours and hair detail)
- Slight brightness and contrast boost (+2-3% each at `level=1.0`, +4-6% each at `level=2.0`)
- Saturation boost (+5% at `level=1.0`, +10% at `level=2.0`, for a more even, fuller skin tone)
- At `level=0` the function SHALL return the original image directly, skipping all processing
#### Scenario: Normal enhancement pipeline call
- **WHEN** `beauty_enhance(img, level=1.0)` is called
- **THEN** it returns a PIL Image that has been sharpened, slightly brightened, and saturation-boosted, with dimensions unchanged
#### Scenario: Processing skipped at level=0
- **WHEN** `beauty_enhance(img, level=0)` is called
- **THEN** the original img object is returned directly, with no enhancement applied
#### Scenario: Enhancement doubled at level=2
- **WHEN** `beauty_enhance(img, level=2.0)` is called
- **THEN** the sharpening, brightness, and saturation adjustments are each twice those at level=1.0
### Requirement: Post-processing pipeline runs beautification before the anti-AI perturbation
The system SHALL, for every output image in the `txt2img` and `img2img` generation flows, execute `beauty_enhance(img, level) → anti_detect_postprocess(img)` in that order, ensuring the beautification completes before perturbations are introduced.
#### Scenario: Generated images pass through the full two-stage post-processing
- **WHEN** `txt2img` successfully generates images
- **THEN** each image passes through `beauty_enhance` first, then `anti_detect_postprocess`, before being returned to the caller
### Requirement: enhance_level parameter is passed from the UI into the post-processing pipeline
The system SHALL support passing an `enhance_level: float` parameter from the Gradio UI through the `generate_images()` function in `services/content.py` to `SDService.txt2img()`, and finally into `beauty_enhance()`. The new parameter defaults to `1.0` for backward compatibility.
#### Scenario: UI beautification-strength slider value reaches the generated result
- **WHEN** the user sets the beautification-strength slider in "Advanced Settings" to 2.0 and clicks Generate
- **THEN** the generated images are processed with `beauty_enhance(img, level=2.0)`
#### Scenario: Behavior unchanged for legacy callers that omit enhance_level
- **WHEN** `generate_images()` is called without an `enhance_level` argument
- **THEN** `level=1.0` is used by default and behavior matches the pre-optimization pipeline


@ -0,0 +1,23 @@
## ADDED Requirements
### Requirement: Each SD model gains a high-quality preset tier
The system SHALL add a `"高画质 (约5分钟)"` ("High Quality, ~5 min") tier to the `presets` dict of every model in `SD_MODEL_PROFILES`. The parameters must satisfy: SD 1.5 models — steps ≥ 50, CFG 6.5-7.5, sampler `DPM++ SDE`; SDXL models — steps ≥ 40, CFG 5.5-6.5, sampler `DPM++ 2M SDE`, with Hires Fix enabled (`enable_hr: true`).
#### Scenario: Generation request includes Hires Fix after selecting the high-quality tier
- **WHEN** the user switches the quality mode to `"高画质 (约5分钟)"` and the current model uses the SDXL architecture
- **THEN** the `txt2img` API request payload contains the `enable_hr: true`, `hr_scale: 1.5`, and `hr_upscaler` fields
#### Scenario: SD 1.5 models do not enable Hires Fix in the high-quality tier
- **WHEN** the user switches the quality mode to `"高画质 (约5分钟)"` and the current model has `arch == "sd15"`
- **THEN** the payload contains no `enable_hr` field, to avoid OOM
#### Scenario: Fallback when the Hires Fix upscaler is missing
- **WHEN** the upscaler model specified by `hr_upscaler` (e.g. "4x-UltraSharp") is unavailable in SD WebUI
- **THEN** the system SHALL automatically fall back to `"Latent"` and log a warning
### Requirement: Preset names update dynamically in the UI per model
The system SHALL, after the user switches the SD model, update the option list of the UI quality-mode radio button so that it always reflects the preset tier names available for the current model.
#### Scenario: Preset list refreshes after switching models
- **WHEN** the user switches the SD model in the connection settings
- **THEN** the options of the "Generation Mode" radio button on the creation tab SHALL update to the key names of that model's `presets`
