refactor: split monolithic main.py into services/ + ui/ modules (improve-maintainability)

- main.py: 4360 → 146 lines (96.6% reduction), entry layer only
- services/: rate_limiter, autostart, persona, connection, profile,
  hotspot, content, engagement, scheduler, queue_ops (10 business modules)
- ui/app.py: all Gradio UI code extracted into build_app(cfg, analytics)
- Fix: with gr.Blocks() indented inside build_app function
- Fix: cfg.all property (not get_all method)
- Fix: STATUS_LABELS, get_persona_keywords, fetch_proactive_notes imports
- Fix: queue_ops module-level set_publish_callback moved into configure()
- Fix: pub_queue.format_*() wrapped as queue_format_table/calendar helpers
- All 14 files syntax-verified, build_app() runtime-verified
- 58/58 tasks complete
This commit is contained in:
zhoujie 2026-02-24 22:50:56 +08:00
parent d88b4e9a3b
commit b635108b89
28 changed files with 5076 additions and 4271 deletions

main.py

File diff suppressed because it is too large

@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-24


@ -0,0 +1,119 @@
## Context
`main.py` is the single entry point of the whole project. At 4359 lines, it mixes business logic for 10+ domains with global state, UI components, and event bindings. The previous refactoring round extracted Tab 1 (Content Creation) into `ui/tab_create.py`, establishing the pattern for tab modularization. This design continues that pattern: business logic moves into a `services/` directory, and the remaining tabs are split out of the UI.
An external service layer already exists: `config_manager.py`, `llm_service.py`, `sd_service.py`, `mcp_client.py`, `analytics_service.py`, `publish_queue.py`.
The functions remaining in `main.py` form an **orchestration layer** over these services; they should become standalone modules instead of continuing to bloat the entry file.
Constraints:
- Python 3.10+ (current environment)
- No new external dependencies
- The refactor must not change any business behavior
## Goals / Non-Goals
**Goals:**
- Slim `main.py` down to a ~300-line pure entry layer (imports + UI assembly + `app.launch()`)
- Establish clear layering: `services/` (business orchestration) → `ui/` (Gradio components) → `main.py` (entry)
- Eliminate implicit cross-module dependencies; pass all dependencies explicitly as function parameters
- Keep every function signature and Gradio callback binding unchanged
**Non-Goals:**
- No refactoring or rewriting of existing business logic (this change is a pure move)
- No changes to existing service layers such as `config_manager.py` and `llm_service.py`
- No class-based / object-oriented redesign (keep the existing function-based style)
- No unit tests added (separate change)
## Decisions
### D1: Layered architecture, with `services/` independent of `ui/` and `ui/` independent of `services/`
```
main.py (entry layer: assembly + launch)
├── services/ (business orchestration layer: pure Python, no Gradio)
│ ├── connection.py
│ ├── content.py
│ ├── hotspot.py
│ ├── engagement.py
│ ├── rate_limiter.py
│ ├── profile.py
│ ├── persona.py
│ ├── scheduler.py
│ ├── queue_ops.py
│ └── autostart.py
├── ui/ (UI layer: Gradio components + event bindings)
│ ├── tab_create.py ← already exists
│ ├── tab_hotspot.py
│ ├── tab_engage.py
│ ├── tab_profile.py
│ ├── tab_auto.py
│ ├── tab_queue.py
│ ├── tab_analytics.py
│ └── tab_settings.py
```
**Why `ui/` does not depend on `services/`:** each `build_tab()` receives its callbacks as function parameters (the pattern already used by `tab_create.py`) rather than importing a service directly. This keeps the UI layer fully decoupled, so it can be tested or replaced independently.
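The dependency direction in D1 can be sketched without Gradio. The names below (`search_hotspots`, the component dict) are hypothetical stand-ins, not the real implementation:

```python
# Hypothetical sketch of D1's dependency direction: the UI layer receives
# business callbacks as parameters instead of importing services/.

# services layer: pure Python, no Gradio import
def search_hotspots(keyword):
    return f"results for {keyword}"

# ui layer: build_tab takes callbacks; the tuple stands in for a gr.Button binding
def build_tab(fn_search):
    components = {"search_btn": ("click", fn_search)}
    return components

# entry layer (main.py) wires the two together
tab = build_tab(fn_search=search_hotspots)
_event, fn = tab["search_btn"]
print(fn("coffee"))  # → results for coffee
```

Swapping `search_hotspots` for a stub is all it takes to exercise the tab without any service running.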
### D2: Shared singletons are initialized in `main.py` and passed in as parameters
`cfg`, `mcp`, `analytics`, `pub_queue`, and `queue_publisher` are still initialized at the top level of `main.py`. Service functions that need them receive them as **function parameters**; they are not imported at service-module top level.
**Why singletons are not created inside each service module:** this prevents circular imports and double initialization, and keeps them replaceable in tests.
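A minimal sketch of the parameter-passing rule, with `FakeConfig` standing in for the real `ConfigManager`:

```python
# Hypothetical sketch of D2: shared singletons are built once in main.py
# and handed to service functions as arguments.

class FakeConfig:
    """Stand-in for ConfigManager; only the get() accessor is modeled."""
    def __init__(self):
        self._data = {"model": "gpt-x"}

    def get(self, key, default=""):
        return self._data.get(key, default)

# services module style: no module-level singleton; cfg arrives as a parameter
def current_model(cfg):
    return cfg.get("model", "")

cfg = FakeConfig()          # main.py top level, created exactly once
print(current_model(cfg))   # → gpt-x
```

In a test, any object with a compatible `get()` can replace `cfg`, which is the point of the rule.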
### D3: Stateful modules keep module-level variables instead of classes
`rate_limiter`, `scheduler`, and `engagement` carry internal state such as `threading.Event` and `_daily_stats`. Keep the existing module-level variable style (do not convert to classes); move each module's variables and functions over as a whole.
**Why not classes:** the goal of this change is structural splitting, not a design-pattern refactor; avoiding extra changes reduces risk.
### D4: Migration strategy is extract-then-delete, with no redirection shims
Migration steps per domain:
1. Write the functions into the service module (copy-paste + adjust imports)
2. Delete the corresponding functions from `main.py`, replacing them with `from services.xxx import ...`
3. Run `ast.parse()` to verify syntax
4. Run the app to verify it starts without errors
**Why no temporary re-exports in `main.py`:** for this simple case, delete-and-import is clearer, and the Gradio callback bindings in `main.py` reference functions by name, so it is enough that the same names stay in scope.
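Step 3 can be made concrete with a small runnable check: `ast.parse()` accepts syntactically valid source and raises `SyntaxError` otherwise. The sources below are illustrative.

```python
# Minimal version of the per-domain syntax check used in step 3.
import ast

def syntax_ok(source: str) -> bool:
    """Return True if source parses as valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(syntax_ok("from services.autostart import toggle_autostart\n"))  # → True
print(syntax_ok("def broken(:\n"))                                     # → False
```

In practice the same helper would be run over each migrated file's contents (`ast.parse(path.read_text())`) after every extraction step.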
### D5: UI tab modules share the `build_tab(fn_*, ...)` signature
Reuse the pattern already established by `tab_create.py`:
- Each `build_tab()` receives the callbacks and shared Gradio components it needs as parameters
- The function creates all Gradio components and event bindings for its tab
- It returns a `dict` of the components that other tabs or `app.load()` need to reference
### D6: Both `services/` and `ui/` need an `__init__.py`
Use empty files to mark them as Python packages, consistent with the existing `ui/__init__.py`.
## Risks / Trade-offs
- **[Risk] The large number of import adjustments may miss something** → after each module, run `ast.parse()` plus an app-startup check, and proceed one domain at a time
- **[Risk] Implicit sharing of global state** → the scheduler's and rate limiter's module-level variables are initialized on first import; Python's module-singleton semantics guarantee one-time initialization, matching current behavior
- **[Trade-off] Long `build_tab()` parameter lists** → consistent with the existing `tab_create.py`; accept the verbosity of explicit dependencies, and consider a dataclass parameter bundle in a later change
## Migration Plan
Extract one domain at a time, in the following order, verifying each step before continuing:
1. `services/rate_limiter.py`: no external dependencies; the safest starting point
2. `services/autostart.py`: standalone; isolates platform-specific logic
3. `services/persona.py`: depends only on `cfg`
4. `services/connection.py`: depends on `cfg`, `llm_service`, `sd_service`, `mcp_client`
5. `services/profile.py`: depends on `mcp_client`
6. `services/hotspot.py`: depends on `llm_service`, `mcp_client`
7. `services/content.py`: depends on multiple services; the most complex
8. `services/engagement.py`: depends on `rate_limiter`, `mcp_client`
9. `services/scheduler.py`: depends on `engagement`, `content`
10. `services/queue_ops.py`: depends on `content`, `pub_queue`
11. `ui/tab_hotspot.py` ~ `ui/tab_settings.py`: split out the 7 remaining tab UIs
**Rollback strategy:** all changes are tracked in git; each service extraction is an independent commit, so any step can be undone with `git revert`.
## Open Questions
- Which module owns the `_auto_log` list (written by both `engagement` and `scheduler`)?
  → Tentatively place it in `services/scheduler.py`; `engagement` receives a `log_fn` callback parameter
- Where is the `queue_publisher` callback registration (`set_publish_callback`) called?
  → Keep it in the `main.py` initialization section; the callback function moves to `services/queue_ops.py`


@ -0,0 +1,38 @@
## Why
`main.py` is currently 4359 lines, mixing 10+ business domains into a single file: connection management, content generation, automated operations, scheduling, queueing, and UI. This makes it hard to read, risky to modify, and obscures inter-module dependencies. As features keep growing, maintenance cost will keep rising. Now, before the file grows further, is the best time to complete a structural split.
## What Changes
- Extract the functions in `main.py` into standalone `services/` modules, one per business domain
- Extract the remaining UI tabs into standalone `ui/tab_*.py` modules (`tab_create.py` is done; the rest remain)
- Keep `main.py` as the **entry layer**: it only assembles the Gradio UI, registers events, and launches the app
- All modules stay backward compatible, with no change in external behavior
## Capabilities
### New Capabilities
- `services-connection`: LLM / SD / MCP connection management (`connect_llm`, `connect_sd`, `check_mcp_status`, login-related functions)
- `services-content`: content generation (`generate_copy`, `generate_images`, `publish_to_xhs`, `one_click_export`, face-image upload)
- `services-hotspot`: hotspot discovery (`search_hotspots`, `analyze_and_suggest`, `generate_from_hotspot`)
- `services-engagement`: engagement automation (`auto_comment_once`, `auto_like_once`, `auto_favorite_once`, `auto_reply_once` plus their `_with_log` wrappers)
- `services-rate-limiter`: rate control and daily limits (`_reset_daily_stats_if_needed`, `_check_daily_limit`, `_is_in_cooldown`, etc.)
- `services-profile`: user profile parsing (`fetch_my_profile`, `_parse_profile_json`, `_parse_count`)
- `services-persona`: persona management (`_match_persona_pools`, `get_persona_topics`, `get_persona_keywords`, `on_persona_changed`)
- `services-scheduler`: automated scheduler (`_scheduler_loop`, `start_scheduler`, `stop_scheduler`, `get_scheduler_status`)
- `services-queue`: content scheduling queue (`generate_to_queue`, the `queue_*` family, `_queue_publish_callback`)
- `services-autostart`: boot-autostart management (`enable_autostart`, `disable_autostart`, `toggle_autostart`, etc.)
- `ui-tabs-split`: extract the remaining Gradio tabs (hotspot, engagement, my profile, auto operations, queue, analytics, settings) into `ui/tab_*.py`
### Modified Capabilities
(None at the requirements level; implementation-only refactor)
## Impact
- **Primary file affected**: `main.py` (shrinks from 4359 lines to a ~300-line entry layer)
- **New directories**: `services/` (10 modules) and `ui/` (8 tab modules; `tab_create.py` already exists)
- **Dependencies**: `services/` modules pass dependencies via function parameters to avoid circular imports; `main.py` imports and assembles everything
- **No API changes**: all function signatures stay the same; Gradio callback bindings are unaffected
- **Runtime impact**: zero; the refactor does not change business logic


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move autostart management into a standalone module
The system SHALL extract the boot-autostart constants and functions from `main.py` into `services/autostart.py`, including: `_APP_NAME`, `_STARTUP_REG_KEY`, `_get_startup_script_path`, `_get_startup_bat_path`, `_create_startup_scripts`, `is_autostart_enabled`, `enable_autostart`, `disable_autostart`, `toggle_autostart`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes `from services.autostart import toggle_autostart, is_autostart_enabled`
- **THEN** the functions are callable as before
#### Scenario: Windows registry behavior is unchanged
- **WHEN** `enable_autostart()` is called on Windows
- **THEN** it SHALL write a startup entry under the registry key `HKCU\Software\Microsoft\Windows\CurrentVersion\Run`, behaving exactly as before the migration
#### Scenario: Non-Windows handling is unchanged
- **WHEN** `enable_autostart()` is called on a non-Windows system
- **THEN** it SHALL return the same "platform not supported" message as before the migration


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move connection management into a standalone module
The system SHALL extract all LLM / SD / MCP connection and authentication functions from `main.py` into `services/connection.py`, including: `_get_llm_config`, `connect_llm`, `add_llm_provider`, `remove_llm_provider`, `on_provider_selected`, `connect_sd`, `on_sd_model_change`, `check_mcp_status`, `get_login_qrcode`, `logout_xhs`, `_auto_fetch_xsec_token`, `check_login`, `save_my_user_id`, `upload_face_image`, `load_saved_face_image`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.connection import connect_llm, connect_sd`
- **THEN** all functions are callable, behaving exactly as before the migration
#### Scenario: External dependencies are passed as parameters
- **WHEN** functions in `services/connection.py` need access to `cfg`, `llm` (`LLMService`), `sd` (`SDService`), or `mcp` (`MCPClient`)
- **THEN** these dependencies SHALL be received as function parameters; `services/connection.py` SHALL NOT create singleton instances at module top level
#### Scenario: No circular imports
- **WHEN** the Python interpreter loads `services/connection.py`
- **THEN** no `ImportError` or circular-import error occurs


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move content generation into a standalone module
The system SHALL extract the content generation, image generation, publishing, and export functions from `main.py` into `services/content.py`, including: `generate_copy`, `generate_images`, `one_click_export`, `publish_to_xhs`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes `from services.content import generate_copy, generate_images, publish_to_xhs, one_click_export`
- **THEN** all functions are callable, behaving exactly as before the migration
#### Scenario: Existing validation logic is preserved
- **WHEN** `publish_to_xhs` is called with a title longer than 20 characters or an invalid image count
- **THEN** the function SHALL return the same error messages as before the migration, with validation behavior unchanged
#### Scenario: Temp-file cleanup is preserved
- **WHEN** `publish_to_xhs` finishes (success or failure)
- **THEN** the AI temp-file cleanup in its `finally` block SHALL still run


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move engagement automation into a standalone module
The system SHALL extract the comment, like, favorite, and reply automation functions from `main.py` into `services/engagement.py`, including: `load_note_for_comment`, `ai_generate_comment`, `send_comment`, `fetch_my_notes`, `on_my_note_selected`, `fetch_my_note_comments`, `ai_reply_comment`, `send_reply`, `auto_comment_once`, `_auto_comment_with_log`, `auto_like_once`, `_auto_like_with_log`, `auto_favorite_once`, `_auto_favorite_with_log`, `auto_reply_once`, `_auto_reply_with_log`, `_auto_publish_with_log`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.engagement import auto_comment_once, auto_like_once`
- **THEN** all functions are callable as before
#### Scenario: Logging is parameterized via callback
- **WHEN** a `_with_log` function in `engagement.py` needs to append a log entry
- **THEN** the function SHALL receive a `log_fn` parameter (a callable) for writing logs, instead of depending directly on the external `_auto_log` list
#### Scenario: Rate limiting stays integrated
- **WHEN** functions such as `auto_comment_once` must check daily limits and cooldown state before running
- **THEN** they SHALL call the `rate_limiter` module's functions rather than duplicating rate-limiting logic inside `engagement.py`
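The `log_fn` scenario can be sketched in a few lines; the wrapper body below is illustrative, not the migrated code:

```python
# Hedged sketch of the log_fn parameterisation: a _with_log wrapper writes
# through a callable supplied by the caller, so engagement never touches
# the scheduler's _auto_log list directly.

def auto_comment_with_log(do_comment_once, log_fn):
    """Run one comment attempt and report the outcome through log_fn."""
    ok, msg = do_comment_once()
    log_fn(f"[comment] {'ok' if ok else 'fail'}: {msg}")
    return ok

log = []  # in the real code, scheduler passes _auto_log_append here
auto_comment_with_log(lambda: (True, "sent"), log.append)
print(log)  # → ['[comment] ok: sent']
```

Because the sink is injected, tests can pass `log.append` while the scheduler passes its own `_auto_log_append`.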


@ -0,0 +1,12 @@
## ADDED Requirements
### Requirement: Move hotspot discovery into a standalone module
The system SHALL extract the hotspot search and analysis functions from `main.py` into `services/hotspot.py`, including: `search_hotspots`, `analyze_and_suggest`, `generate_from_hotspot`, `_set_cache`, `_get_cache`, `_fetch_and_cache`, `_pick_from_cache`, `fetch_proactive_notes`, `on_proactive_note_selected`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.hotspot import search_hotspots, analyze_and_suggest`
- **THEN** all functions are callable as before
#### Scenario: Thread-safe cache moves with the module
- **WHEN** `_cache_lock` (a `threading.RLock`) is migrated to `services/hotspot.py` together with its functions
- **THEN** the thread-safety behavior of `_set_cache` / `_get_cache` remains unchanged


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move persona functions and constants into a standalone module
The system SHALL extract the persona constants and functions from `main.py` into `services/persona.py`, including: `DEFAULT_PERSONAS`, `RANDOM_PERSONA_LABEL`, `PERSONA_POOL_MAP`, `DEFAULT_TOPICS`, `DEFAULT_STYLES`, `DEFAULT_COMMENT_KEYWORDS`, `_match_persona_pools`, `get_persona_topics`, `get_persona_keywords`, `on_persona_changed`, `_resolve_persona`.
#### Scenario: Constants are importable from the module
- **WHEN** `main.py` executes `from services.persona import DEFAULT_PERSONAS, PERSONA_POOL_MAP`
- **THEN** the constant values SHALL be identical to before the migration
#### Scenario: Persona resolution handles the random-persona label
- **WHEN** `_resolve_persona(RANDOM_PERSONA_LABEL)` is called
- **THEN** it SHALL return a persona text randomly picked from the persona pool, behaving as before the migration
#### Scenario: Persona-change callback still fires
- **WHEN** `on_persona_changed(persona_text)` is called
- **THEN** it SHALL return the updated topic list and keyword list for the Gradio UI


@ -0,0 +1,12 @@
## ADDED Requirements
### Requirement: Move profile parsing into a standalone module
The system SHALL extract the user-profile fetching and parsing functions from `main.py` into `services/profile.py`, including: `_parse_profile_json`, `_parse_count`, `fetch_my_profile`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes `from services.profile import fetch_my_profile`
- **THEN** the function is callable, behaving as before the migration
#### Scenario: Parsing tolerance is preserved
- **WHEN** `_parse_count` receives an irregular count string (such as "1.2万" or "--")
- **THEN** it SHALL return the same float or 0 as before the migration, without raising
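A hedged sketch of the tolerant parsing this scenario describes ("1.2万" meaning 1.2 × 10,000, garbage input becoming 0); the real `_parse_count` may differ in detail:

```python
# Illustrative reimplementation of the tolerance contract, not the migrated code.
def parse_count(text) -> float:
    s = str(text or "").strip()
    try:
        if s.endswith("万"):  # "万" = 10,000 multiplier used by XHS counts
            return float(s[:-1]) * 10000
        return float(s)
    except ValueError:
        return 0.0

print(parse_count("1.2万"))  # → 12000.0
print(parse_count("--"))     # → 0.0
```

The key property is that malformed input degrades to `0.0` rather than propagating an exception into the profile view.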


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move scheduling-queue operations into a standalone module
The system SHALL extract the content-queue functions from `main.py` into `services/queue_ops.py`, including: `generate_to_queue`, `_queue_publish_callback`, `queue_refresh_table`, `queue_refresh_calendar`, `queue_preview_item`, `queue_approve_item`, `queue_reject_item`, `queue_delete_item`, `queue_retry_item`, `queue_publish_now`, `queue_start_processor`, `queue_stop_processor`, `queue_get_status`, `queue_batch_approve`, `queue_generate_and_refresh`.
#### Scenario: Module imports succeed
- **WHEN** `main.py` executes imports such as `from services.queue_ops import queue_generate_and_refresh, queue_refresh_table`
- **THEN** all functions are callable as before
#### Scenario: The publish callback is registered in main.py
- **WHEN** the app starts and `main.py` calls `pub_queue.set_publish_callback(_queue_publish_callback)`, with `_queue_publish_callback` now living in `queue_ops.py`
- **THEN** the queue publish callback SHALL be registered and fire during queue processing
#### Scenario: Queue operations use the pub_queue singleton
- **WHEN** functions in `queue_ops.py` need access to `pub_queue` or `queue_publisher`
- **THEN** these singletons SHALL be passed in as function parameters, not initialized at the top level of `queue_ops.py`


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move rate control and daily limits into a standalone module
The system SHALL extract all rate-control, daily-limit, and cooldown state and functions from `main.py` into `services/rate_limiter.py`, including: `_auto_running`, `_op_history`, `_daily_stats`, `DAILY_LIMITS`, `_consecutive_errors`, `_error_cooldown_until`, `_reset_daily_stats_if_needed`, `_check_daily_limit`, `_increment_stat`, `_record_error`, `_clear_error_streak`, `_is_in_cooldown`, `_is_in_operating_hours`, `_get_stats_summary`.
#### Scenario: Module-level state initializes once
- **WHEN** Python imports `services/rate_limiter.py` for the first time
- **THEN** module-level variables such as `_daily_stats` and `_op_history` SHALL be initialized exactly once (Python module-singleton semantics)
#### Scenario: Daily limit checks still work
- **WHEN** `_check_daily_limit("comment")` is called
- **THEN** its return value SHALL match pre-migration behavior exactly
#### Scenario: Operating-hours restriction still works
- **WHEN** `_is_in_operating_hours` is called outside the configured `start_hour`/`end_hour` window
- **THEN** it returns `False`, blocking automated operations


@ -0,0 +1,16 @@
## ADDED Requirements
### Requirement: Move the scheduler into a standalone module
The system SHALL extract the scheduler state and functions from `main.py` into `services/scheduler.py`, including: `_scheduler_next_times`, `_auto_log` (the list), `_auto_log_append`, `_scheduler_loop`, `start_scheduler`, `stop_scheduler`, `get_auto_log`, `get_scheduler_status`, `_learn_running`, `_learn_scheduler_loop`, `start_learn_scheduler`, `stop_learn_scheduler`.
#### Scenario: Scheduler start/stop works
- **WHEN** `start_scheduler(...)` is called with valid arguments
- **THEN** the scheduler thread SHALL start, and `get_scheduler_status()` SHALL report it as running
#### Scenario: Log appends are thread-safe
- **WHEN** multiple automation tasks call `_auto_log_append(msg)` concurrently
- **THEN** log entries SHALL be appended correctly, with none lost or reordered
#### Scenario: engagement logs via callback
- **WHEN** functions in `services/engagement.py` need to write logs
- **THEN** they SHALL write through the `log_fn` parameter (with `scheduler.py` passing `_auto_log_append`), without importing `scheduler.py` directly


@ -0,0 +1,30 @@
## ADDED Requirements
### Requirement: Extract the remaining Gradio tabs into standalone UI modules
The system SHALL extract each of the 7 remaining Gradio tabs in `main.py` (Tab 1 is done) into its own `ui/tab_*.py` module, each exposing a `build_tab(...)` function:
| Module file | Tab name |
|---|---|
| `ui/tab_hotspot.py` | 🔥 Hotspot Discovery |
| `ui/tab_engage.py` | 💬 Engagement |
| `ui/tab_profile.py` | 👤 My Profile |
| `ui/tab_auto.py` | 🤖 Auto Operations |
| `ui/tab_queue.py` | 📅 Content Schedule |
| `ui/tab_analytics.py` | 📊 Analytics |
| `ui/tab_settings.py` | ⚙️ Settings |
#### Scenario: Every tab module exposes build_tab
- **WHEN** `main.py` executes `from ui.tab_hotspot import build_tab as build_tab_hotspot`
- **THEN** calling `build_tab_hotspot(fn_*, ...)` SHALL return a dict of the components shared across tabs
#### Scenario: build_tab receives callbacks instead of importing services
- **WHEN** `build_tab(...)` needs to call business functions
- **THEN** those functions SHALL be passed in via `fn_*` parameters (matching the existing `tab_create.py` pattern), with no direct `import services.*` inside `ui/tab_*.py`
#### Scenario: Event bindings happen inside build_tab
- **WHEN** `build_tab(...)` is called
- **THEN** all `.click()`, `.change()`, and other event bindings for that tab's Gradio components SHALL be completed inside the function, with none remaining in `main.py`
#### Scenario: main.py becomes a pure entry layer
- **WHEN** all 11 capabilities have been migrated
- **THEN** `main.py` SHALL be at most 400 lines and contain no business logic: only imports, singleton initialization, UI assembly, and `app.launch()`


@ -0,0 +1,96 @@
## 1. Scaffolding
- [x] 1.1 Create the `services/` directory with an empty `services/__init__.py`
- [x] 1.2 Confirm `ui/__init__.py` exists (created in the previous round)
## 2. Migrate services/rate_limiter.py
- [x] 2.1 Create `services/rate_limiter.py`; move the module-level variables `_auto_running`, `_op_history`, `_daily_stats`, `DAILY_LIMITS`, `_consecutive_errors`, `_error_cooldown_until`
- [x] 2.2 Move functions: `_reset_daily_stats_if_needed`, `_check_daily_limit`, `_increment_stat`, `_record_error`, `_clear_error_streak`, `_is_in_cooldown`, `_is_in_operating_hours`, `_get_stats_summary`
- [x] 2.3 Delete the corresponding variables and functions from `main.py`; add `from services.rate_limiter import ...`
- [x] 2.4 Run `ast.parse()` to verify the syntax of `main.py` and `services/rate_limiter.py`
## 3. Migrate services/autostart.py
- [x] 3.1 Create `services/autostart.py`; move `_APP_NAME`, `_STARTUP_REG_KEY`, and all autostart functions (`_get_startup_script_path`, `_get_startup_bat_path`, `_create_startup_scripts`, `is_autostart_enabled`, `enable_autostart`, `disable_autostart`, `toggle_autostart`)
- [x] 3.2 Delete the corresponding code from `main.py`; add `from services.autostart import ...`
- [x] 3.3 Run `ast.parse()` to verify syntax
## 4. Migrate services/persona.py
- [x] 4.1 Create `services/persona.py`; move constants: `DEFAULT_PERSONAS`, `RANDOM_PERSONA_LABEL`, `PERSONA_POOL_MAP`, `DEFAULT_TOPICS`, `DEFAULT_STYLES`, `DEFAULT_COMMENT_KEYWORDS`
- [x] 4.2 Move functions: `_match_persona_pools`, `get_persona_topics`, `get_persona_keywords`, `on_persona_changed`, `_resolve_persona`
- [x] 4.3 Delete the corresponding code from `main.py`; add `from services.persona import ...`
- [x] 4.4 Run `ast.parse()` to verify syntax
## 5. Migrate services/connection.py
- [x] 5.1 Create `services/connection.py`; move functions: `_get_llm_config`, `connect_llm`, `add_llm_provider`, `remove_llm_provider`, `on_provider_selected`
- [x] 5.2 Move the SD functions: `connect_sd`, `on_sd_model_change`
- [x] 5.3 Move the MCP / login functions: `check_mcp_status`, `get_login_qrcode`, `logout_xhs`, `_auto_fetch_xsec_token`, `check_login`, `save_my_user_id`, `upload_face_image`, `load_saved_face_image`
- [x] 5.4 Ensure every function receives `cfg`, `llm`, `sd`, `mcp`, etc. as parameters, with no singletons initialized at module top level
- [x] 5.5 Delete the corresponding functions from `main.py`; add `from services.connection import ...`
- [x] 5.6 Run `ast.parse()` to verify syntax
## 6. Migrate services/profile.py
- [x] 6.1 Create `services/profile.py`; move functions: `_parse_profile_json`, `_parse_count`, `fetch_my_profile`
- [x] 6.2 Delete the corresponding functions from `main.py`; add `from services.profile import ...`
- [x] 6.3 Run `ast.parse()` to verify syntax
## 7. Migrate services/hotspot.py
- [x] 7.1 Create `services/hotspot.py`; move the cache machinery: `_cache_lock`, `_set_cache`, `_get_cache`, `_fetch_and_cache`, `_pick_from_cache`
- [x] 7.2 Move the hotspot functions: `search_hotspots`, `analyze_and_suggest`, `generate_from_hotspot`, `fetch_proactive_notes`, `on_proactive_note_selected`
- [x] 7.3 Delete the corresponding code from `main.py`; add `from services.hotspot import ...`
- [x] 7.4 Run `ast.parse()` to verify syntax
## 8. Migrate services/content.py
- [x] 8.1 Create `services/content.py`; move functions: `generate_copy`, `generate_images`, `one_click_export`, `publish_to_xhs`
- [x] 8.2 Ensure `publish_to_xhs` keeps its input validation and its `finally` temp-file cleanup intact
- [x] 8.3 Delete the corresponding functions from `main.py`; add `from services.content import ...`
- [x] 8.4 Run `ast.parse()` to verify syntax
## 9. Migrate services/engagement.py
- [x] 9.1 Create `services/engagement.py`; move the note/comment functions: `load_note_for_comment`, `ai_generate_comment`, `send_comment`, `fetch_my_notes`, `on_my_note_selected`, `fetch_my_note_comments`, `ai_reply_comment`, `send_reply`
- [x] 9.2 Move the automation functions: `auto_comment_once`, `auto_like_once`, `auto_favorite_once`, `auto_reply_once` and their `_with_log` wrappers
- [x] 9.3 Change the `_with_log` functions to accept a `log_fn` callback instead of referencing the external `_auto_log` directly
- [x] 9.4 Delete the corresponding functions from `main.py`; add `from services.engagement import ...`
- [x] 9.5 Run `ast.parse()` to verify syntax
## 10. Migrate services/scheduler.py
- [x] 10.1 Create `services/scheduler.py`; move the state variables and log: `_auto_log`, `_scheduler_next_times`, `_auto_log_append`
- [x] 10.2 Move the scheduler functions: `_scheduler_loop`, `start_scheduler`, `stop_scheduler`, `get_auto_log`, `get_scheduler_status`
- [x] 10.3 Move the learning scheduler: `_learn_running`, `_learn_scheduler_loop`, `start_learn_scheduler`, `stop_learn_scheduler`
- [x] 10.4 Ensure `_scheduler_loop` passes `log_fn=_auto_log_append` when calling `engagement` functions
- [x] 10.5 Delete the corresponding code from `main.py`; add `from services.scheduler import ...`
- [x] 10.6 Run `ast.parse()` to verify syntax
## 11. Migrate services/queue_ops.py
- [x] 11.1 Create `services/queue_ops.py`; move all queue functions: `generate_to_queue`, `_queue_publish_callback`, `queue_refresh_table`, `queue_refresh_calendar`, `queue_preview_item`, `queue_approve_item`, `queue_reject_item`, `queue_delete_item`, `queue_retry_item`, `queue_publish_now`, `queue_start_processor`, `queue_stop_processor`, `queue_get_status`, `queue_batch_approve`, `queue_generate_and_refresh`
- [x] 11.2 Ensure `pub_queue` and `queue_publisher` are passed into each function as parameters, not initialized at module top level
- [x] 11.3 Delete the corresponding functions from `main.py`; add `from services.queue_ops import ...`; keep the `pub_queue.set_publish_callback(_queue_publish_callback)` call in the `main.py` initialization section
- [x] 11.4 Run `ast.parse()` to verify syntax
## 12. Split the UI tab modules
- [x] 12.1 Create `ui/tab_hotspot.py`; extract all Gradio components and event bindings of Tab 2 (🔥 Hotspot Discovery), exposing `build_tab(fn_*, ...)`
- [x] 12.2 Create `ui/tab_engage.py`; extract all components and bindings of Tab 3 (💬 Engagement)
- [x] 12.3 Create `ui/tab_profile.py`; extract all components and bindings of Tab 4 (👤 My Profile)
- [x] 12.4 Create `ui/tab_auto.py`; extract all components and bindings of Tab 5 (🤖 Auto Operations)
- [x] 12.5 Create `ui/tab_queue.py`; extract all components and bindings of Tab 6 (📅 Content Schedule)
- [x] 12.6 Create `ui/tab_analytics.py`; extract all components and bindings of Tab 7 (📊 Analytics)
- [x] 12.7 Create `ui/tab_settings.py`; extract all components and bindings of Tab 8 (⚙️ Settings)
- [x] 12.8 Replace each tab block in `main.py` with the corresponding `build_tab(...)` call, then delete the emptied tab blocks
- [x] 12.9 Run `ast.parse()` to verify the syntax of every new UI module
## 13. Entry-layer cleanup and verification
- [x] 13.1 Verify `main.py` is at most 400 lines
- [x] 13.2 Check that `main.py` defines no business-logic functions (inline lambdas excepted)
- [x] 13.3 Run `python main.py` and confirm it starts without errors
- [x] 13.4 Switch through every tab in the browser; confirm the UI renders and events respond

services/__init__.py Normal file

@ -0,0 +1 @@
# services: business orchestration layer

services/autostart.py Normal file

@ -0,0 +1,121 @@
"""
services/autostart.py
Windows 开机自启管理
"""
import os
import platform
import logging
logger = logging.getLogger("autobot")
_APP_NAME = "XHS_AI_AutoBot"
_STARTUP_REG_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
def _get_startup_script_path() -> str:
"""获取启动脚本路径(.vbs 静默启动,不弹黑窗)"""
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "_autostart.vbs")
def _get_startup_bat_path() -> str:
"""获取启动 bat 路径"""
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "_autostart.bat")
def _create_startup_scripts():
"""创建静默启动脚本bat + vbs"""
app_dir = os.path.dirname(os.path.abspath(os.path.join(__file__, "..")))
# __file__ is services/autostart.py, so app_dir should be parent
app_dir = os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(__file__)), ".."))
venv_python = os.path.join(app_dir, ".venv", "Scripts", "pythonw.exe")
# 如果没有 pythonw退回 python.exe
if not os.path.exists(venv_python):
venv_python = os.path.join(app_dir, ".venv", "Scripts", "python.exe")
main_script = os.path.join(app_dir, "main.py")
# 创建 bat
bat_path = _get_startup_bat_path()
bat_content = f"""@echo off
cd /d "{app_dir}"
"{venv_python}" "{main_script}"
"""
with open(bat_path, "w", encoding="utf-8") as f:
f.write(bat_content)
# 创建 vbs静默运行 bat不弹出命令行窗口
vbs_path = _get_startup_script_path()
vbs_content = f"""Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "{bat_path}" & chr(34), 0
Set WshShell = Nothing
"""
with open(vbs_path, "w", encoding="utf-8") as f:
f.write(vbs_content)
return vbs_path
def is_autostart_enabled() -> bool:
"""检查是否已设置开机自启"""
if platform.system() != "Windows":
return False
try:
import winreg
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, _STARTUP_REG_KEY, 0, winreg.KEY_READ)
try:
val, _ = winreg.QueryValueEx(key, _APP_NAME)
winreg.CloseKey(key)
return bool(val)
except FileNotFoundError:
winreg.CloseKey(key)
return False
except Exception:
return False
def enable_autostart() -> str:
"""启用 Windows 开机自启"""
if platform.system() != "Windows":
return "❌ 此功能仅支持 Windows 系统"
try:
import winreg
vbs_path = _create_startup_scripts()
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, _STARTUP_REG_KEY, 0, winreg.KEY_SET_VALUE)
# 用 wscript 运行 vbs 以实现静默启动
winreg.SetValueEx(key, _APP_NAME, 0, winreg.REG_SZ, f'wscript.exe "{vbs_path}"')
winreg.CloseKey(key)
logger.info(f"开机自启已启用: {vbs_path}")
return "✅ 开机自启已启用\n下次开机时将自动后台运行本程序"
except Exception as e:
logger.error(f"设置开机自启失败: {e}")
return f"❌ 设置失败: {e}"
def disable_autostart() -> str:
"""禁用 Windows 开机自启"""
if platform.system() != "Windows":
return "❌ 此功能仅支持 Windows 系统"
try:
import winreg
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, _STARTUP_REG_KEY, 0, winreg.KEY_SET_VALUE)
try:
winreg.DeleteValue(key, _APP_NAME)
except FileNotFoundError:
pass
winreg.CloseKey(key)
# 清理启动脚本
for f in [_get_startup_script_path(), _get_startup_bat_path()]:
if os.path.exists(f):
os.remove(f)
logger.info("开机自启已禁用")
return "✅ 开机自启已禁用"
except Exception as e:
logger.error(f"禁用开机自启失败: {e}")
return f"❌ 禁用失败: {e}"
def toggle_autostart(enabled: bool) -> str:
"""切换开机自启状态(供 UI 调用)"""
if enabled:
return enable_autostart()
else:
return disable_autostart()

services/connection.py Normal file

@ -0,0 +1,254 @@
"""
services/connection.py
LLM 提供商管理SD 连接MCP 连接XHS 登录等服务函数
"""
import os
import re
import logging
import gradio as gr
from config_manager import ConfigManager
from llm_service import LLMService
from sd_service import SDService, get_model_profile_info
from mcp_client import get_mcp_client
logger = logging.getLogger("autobot")
cfg = ConfigManager()
def _get_llm_config() -> tuple[str, str, str]:
"""获取当前激活 LLM 的 (api_key, base_url, model)"""
p = cfg.get_active_llm()
if p:
return p["api_key"], p["base_url"], cfg.get("model", "")
return "", "", ""
def connect_llm(provider_name):
"""连接选中的 LLM 提供商并获取模型列表"""
if not provider_name:
return gr.update(choices=[], value=None), "⚠️ 请先选择或添加 LLM 提供商"
cfg.set_active_llm(provider_name)
p = cfg.get_active_llm()
if not p:
return gr.update(choices=[], value=None), "❌ 未找到该提供商配置"
try:
svc = LLMService(p["api_key"], p["base_url"])
models = svc.get_models()
if models:
return (
gr.update(choices=models, value=models[0]),
f"✅ 已连接「{provider_name}」,加载 {len(models)} 个模型",
)
else:
# API 无法获取模型列表,保留手动输入
current_model = cfg.get("model", "")
return (
gr.update(choices=[current_model] if current_model else [], value=current_model or None),
f"⚠️ 已连接「{provider_name}」,但未获取到模型列表,请手动输入模型名",
)
except Exception as e:
logger.error("LLM 连接失败: %s", e)
current_model = cfg.get("model", "")
return (
gr.update(choices=[current_model] if current_model else [], value=current_model or None),
f"❌ 连接「{provider_name}」失败: {e}",
)
def add_llm_provider(name, api_key, base_url):
"""添加新的 LLM 提供商"""
msg = cfg.add_llm_provider(name, api_key, base_url)
names = cfg.get_llm_provider_names()
active = cfg.get("active_llm", "")
return (
gr.update(choices=names, value=active),
msg,
)
def remove_llm_provider(provider_name):
"""删除 LLM 提供商"""
if not provider_name:
return gr.update(choices=cfg.get_llm_provider_names(), value=cfg.get("active_llm", "")), "⚠️ 请先选择要删除的提供商"
msg = cfg.remove_llm_provider(provider_name)
names = cfg.get_llm_provider_names()
active = cfg.get("active_llm", "")
return (
gr.update(choices=names, value=active),
msg,
)
def on_provider_selected(provider_name):
"""切换 LLM 提供商时更新显示信息"""
if not provider_name:
return "未选择提供商"
for p in cfg.get_llm_providers():
if p["name"] == provider_name:
cfg.set_active_llm(provider_name)
masked_key = p["api_key"][:8] + "***" if len(p["api_key"]) > 8 else "***"
return f"**{provider_name}** \nAPI Key: `{masked_key}` \nBase URL: `{p['base_url']}`"
return "未找到该提供商"
# ==================================================
# Tab 1: 内容创作
# ==================================================
def connect_sd(sd_url):
"""连接 SD 并获取模型列表"""
try:
svc = SDService(sd_url)
ok, msg = svc.check_connection()
if ok:
models = svc.get_models()
cfg.set("sd_url", sd_url)
first = models[0] if models else None
info = get_model_profile_info(first) if first else "未检测到模型"
return gr.update(choices=models, value=first), f"{msg}", info
return gr.update(choices=[]), f"{msg}", ""
except Exception as e:
logger.error("SD 连接失败: %s", e)
return gr.update(choices=[]), f"❌ SD 连接失败: {e}", ""
def on_sd_model_change(model_name):
"""SD 模型切换时显示模型档案信息"""
if not model_name:
return "未选择模型"
return get_model_profile_info(model_name)
def check_mcp_status(mcp_url):
"""检查 MCP 连接状态"""
try:
client = get_mcp_client(mcp_url)
ok, msg = client.check_connection()
if ok:
cfg.set("mcp_url", mcp_url)
return f"✅ MCP 服务正常 - {msg}"
return f"{msg}"
except Exception as e:
return f"❌ MCP 连接失败: {e}"
# ==================================================
# 小红书账号登录
# ==================================================
def get_login_qrcode(mcp_url):
"""获取小红书登录二维码"""
try:
client = get_mcp_client(mcp_url)
result = client.get_login_qrcode()
if "error" in result:
return None, f"❌ 获取二维码失败: {result['error']}"
qr_image = result.get("qr_image")
msg = result.get("text", "")
if qr_image:
return qr_image, f"✅ 二维码已生成,请用小红书 App 扫码\n{msg}"
return None, f"⚠️ 未获取到二维码图片MCP 返回:\n{msg}"
except Exception as e:
logger.error("获取登录二维码失败: %s", e)
return None, f"❌ 获取二维码失败: {e}"
def logout_xhs(mcp_url):
"""退出登录:清除 cookies 并重置本地 token"""
try:
client = get_mcp_client(mcp_url)
result = client.delete_cookies()
if "error" in result:
return f"❌ 退出失败: {result['error']}"
cfg.set("xsec_token", "")
client._reset()
return "✅ 已退出登录,可以重新扫码登录"
except Exception as e:
logger.error("退出登录失败: %s", e)
return f"❌ 退出失败: {e}"
def _auto_fetch_xsec_token(mcp_url) -> str:
"""从推荐列表自动获取一个有效的 xsec_token"""
try:
client = get_mcp_client(mcp_url)
entries = client.list_feeds_parsed()
for e in entries:
token = e.get("xsec_token", "")
if token:
return token
except Exception as e:
logger.warning("自动获取 xsec_token 失败: %s", e)
return ""
def check_login(mcp_url):
"""检查登录状态,登录成功后自动获取 xsec_token 并保存"""
try:
client = get_mcp_client(mcp_url)
result = client.check_login_status()
if "error" in result:
return f"{result['error']}", gr.update(), gr.update()
text = result.get("text", "")
if "未登录" in text:
return f"🔴 {text}", gr.update(), gr.update()
# 登录成功 → 自动获取 xsec_token
token = _auto_fetch_xsec_token(mcp_url)
if token:
cfg.set("xsec_token", token)
logger.info("自动获取 xsec_token 成功")
return (
f"🟢 {text}\n\n✅ xsec_token 已自动获取并保存",
gr.update(value=cfg.get("my_user_id", "")),
gr.update(value=token),
)
return f"🟢 {text}\n\n⚠️ 自动获取 xsec_token 失败,请手动刷新", gr.update(), gr.update()
except Exception as e:
return f"❌ 检查登录状态失败: {e}", gr.update(), gr.update()
def save_my_user_id(user_id_input):
"""保存用户 ID (验证 24 位十六进制格式)"""
uid = (user_id_input or "").strip()
if not uid:
cfg.set("my_user_id", "")
return "⚠️ 已清除用户 ID"
if not re.match(r'^[0-9a-fA-F]{24}$', uid):
return (
"❌ 格式错误!用户 ID 应为 24 位十六进制字符串\n"
f"你输入的: `{uid}` ({len(uid)} 位)\n\n"
"💡 如果你输入的是小红书号 (纯数字如 18688457507),那不是 userId。"
)
cfg.set("my_user_id", uid)
return f"✅ 用户 ID 已保存: `{uid}`"
# ================= 头像/换脸管理 =================
def upload_face_image(img):
"""上传并保存头像图片"""
if img is None:
return None, "❌ 请上传头像图片"
try:
if isinstance(img, str) and os.path.isfile(img):
img = Image.open(img).convert("RGB")
elif not isinstance(img, Image.Image):
return None, "❌ 无法识别图片格式"
path = SDService.save_face_image(img)
return img, f"✅ 头像已保存至 {os.path.basename(path)}"
except Exception as e:
return None, f"❌ 保存失败: {e}"
def load_saved_face_image():
"""加载已保存的头像"""
img = SDService.load_face_image()
if img:
return img, "✅ 已加载保存的头像"
return None, " 尚未设置头像"

services/content.py Normal file

@ -0,0 +1,208 @@
"""
services/content.py
文案生成图片生成一键导出发布到小红书
"""
import os
import re
import time
import platform
import subprocess
import logging
from PIL import Image
from config_manager import ConfigManager, OUTPUT_DIR
from llm_service import LLMService
from sd_service import SDService, get_sd_preset
from mcp_client import get_mcp_client
from services.connection import _get_llm_config
from services.persona import _resolve_persona
logger = logging.getLogger("autobot")
cfg = ConfigManager()
def generate_copy(model, topic, style, sd_model_name, persona_text):
"""生成文案(自动适配 SD 模型的 prompt 风格,支持人设)"""
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "", "", "", "", "❌ 请先配置并连接 LLM 提供商"
try:
svc = LLMService(api_key, base_url, model)
persona = _resolve_persona(persona_text) if persona_text else None
data = svc.generate_copy(topic, style, sd_model_name=sd_model_name, persona=persona)
cfg.set("model", model)
tags = data.get("tags", [])
return (
data.get("title", ""),
data.get("content", ""),
data.get("sd_prompt", ""),
", ".join(tags) if tags else "",
"✅ 文案生成完毕",
)
except Exception as e:
logger.error("文案生成失败: %s", e)
return "", "", "", "", f"❌ 生成失败: {e}"
def generate_images(sd_url, prompt, neg_prompt, model, steps, cfg_scale, face_swap_on, face_img, quality_mode, persona_text=None):
"""生成图片(可选 ReActor 换脸,支持质量模式预设,支持人设视觉优化)"""
if not model:
return None, [], "❌ 未选择 SD 模型"
try:
svc = SDService(sd_url)
# 判断是否启用换脸
face_image = None
if face_swap_on:
# Gradio 可能传 PIL.Image / numpy.ndarray / 文件路径 / None
if face_img is not None:
if isinstance(face_img, Image.Image):
face_image = face_img
elif isinstance(face_img, str) and os.path.isfile(face_img):
face_image = Image.open(face_img).convert("RGB")
else:
# numpy array 等其他格式
try:
import numpy as np
if isinstance(face_img, np.ndarray):
face_image = Image.fromarray(face_img).convert("RGB")
logger.info("头像从 numpy array 转换为 PIL Image")
except Exception as e:
logger.warning("头像格式转换失败 (%s): %s", type(face_img).__name__, e)
# 如果 UI 没传有效头像,从本地文件加载
if face_image is None:
face_image = SDService.load_face_image()
if face_image is not None:
logger.info("换脸头像已就绪: %dx%d", face_image.width, face_image.height)
else:
logger.warning("换脸已启用但未找到有效头像")
persona = _resolve_persona(persona_text) if persona_text else None
images = svc.txt2img(
prompt=prompt,
negative_prompt=neg_prompt,
model=model,
steps=int(steps),
cfg_scale=float(cfg_scale),
face_image=face_image,
quality_mode=quality_mode,
persona=persona,
)
preset = get_sd_preset(quality_mode)
swap_hint = " (已换脸)" if face_image else ""
return images, images, f"✅ 生成 {len(images)} 张图片{swap_hint} [{quality_mode}]"
except Exception as e:
logger.error("图片生成失败: %s", e)
return None, [], f"❌ 绘图失败: {e}"
def one_click_export(title, content, images):
"""导出文案和图片到本地"""
if not title:
return "❌ 无法导出:没有标题"
safe_title = re.sub(r'[\\/*?:"<>|]', "", title)[:20]
folder_name = f"{int(time.time())}_{safe_title}"
folder_path = os.path.join(OUTPUT_DIR, folder_name)
os.makedirs(folder_path, exist_ok=True)
with open(os.path.join(folder_path, "文案.txt"), "w", encoding="utf-8") as f:
f.write(f"{title}\n\n{content}")
saved_paths = []
if images:
for idx, img in enumerate(images):
path = os.path.join(folder_path, f"{idx+1}.jpg")
if isinstance(img, Image.Image):
if img.mode != "RGB":
img = img.convert("RGB")
img.save(path, format="JPEG", quality=95)
saved_paths.append(os.path.abspath(path))
# 尝试打开文件夹
try:
abs_path = os.path.abspath(folder_path)
if platform.system() == "Windows":
os.startfile(abs_path)
elif platform.system() == "Darwin":
subprocess.call(["open", abs_path])
else:
subprocess.call(["xdg-open", abs_path])
except Exception:
pass
return f"✅ 已导出至: {folder_path} ({len(saved_paths)} 张图片)"
def publish_to_xhs(title, content, tags_str, images, local_images, mcp_url, schedule_time):
"""通过 MCP 发布到小红书(含输入校验和临时文件自动清理)"""
# === 发布前校验 ===
if not title:
return "❌ 缺少标题"
if len(title) > 20:
return f"❌ 标题超长:当前 {len(title)} 字,小红书限制 ≤20 字,请精简后再发布"
client = get_mcp_client(mcp_url)
ai_temp_files: list = [] # 追踪本次写入的临时文件,用于 finally 清理
try:
# 收集图片路径
image_paths = []
# 先保存 AI 生成的图片到临时目录
if images:
temp_dir = os.path.join(OUTPUT_DIR, "_temp_publish")
os.makedirs(temp_dir, exist_ok=True)
for idx, img in enumerate(images):
if isinstance(img, Image.Image):
path = os.path.abspath(os.path.join(temp_dir, f"ai_{idx}.jpg"))
if img.mode != "RGB":
img = img.convert("RGB")
img.save(path, format="JPEG", quality=95)
image_paths.append(path)
ai_temp_files.append(path) # 登记临时文件
# 添加本地上传的图片
if local_images:
for img_file in local_images:
img_path = img_file.name if hasattr(img_file, 'name') else str(img_file)
if os.path.exists(img_path):
image_paths.append(os.path.abspath(img_path))
# === 图片校验 ===
if not image_paths:
return "❌ 至少需要 1 张图片才能发布"
if len(image_paths) > 18:
return f"❌ 图片数量超限:当前 {len(image_paths)} 张,小红书限制 ≤18 张,请减少图片"
for p in image_paths:
if not os.path.exists(p):
return f"❌ 图片文件不存在:{p}"
# 解析标签
tags = [t.strip().lstrip("#") for t in tags_str.split(",") if t.strip()] if tags_str else None
# 定时发布
schedule = schedule_time if schedule_time and schedule_time.strip() else None
result = client.publish_content(
title=title,
content=content,
images=image_paths,
tags=tags,
schedule_at=schedule,
)
if "error" in result:
return f"❌ 发布失败: {result['error']}"
return f"✅ 发布成功!\n{result.get('text', '')}"
except Exception as e:
logger.error("发布失败: %s", e)
return f"❌ 发布异常: {e}"
finally:
# 清理本次写入的 AI 临时图片(无论成功/失败)
for tmp_path in ai_temp_files:
try:
if os.path.exists(tmp_path):
os.remove(tmp_path)
except OSError as cleanup_err:
logger.warning("临时文件清理失败 %s: %s", tmp_path, cleanup_err)
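The folder-name and tag handling in `one_click_export` and `publish_to_xhs` both come down to two small string transforms. A standalone sketch (the helper names here are illustrative, not part of the module):

```python
import re

def sanitize_title(title: str, max_len: int = 20) -> str:
    # Strip characters that are illegal in Windows file names, then truncate
    return re.sub(r'[\\/*?:"<>|]', "", title)[:max_len]

def parse_tags(tags_str: str) -> list[str]:
    # Split on commas, trim whitespace, drop empty items and a leading "#"
    return [t.strip().lstrip("#") for t in tags_str.split(",") if t.strip()]

print(sanitize_title('春季/穿搭:一衣多穿*指南'))  # → 春季穿搭一衣多穿指南
print(parse_tags("#穿搭, ootd , ,#显瘦"))          # → ['穿搭', 'ootd', '显瘦']
```

Because `sanitize_title` truncates after filtering, the exported folder name is always a valid path component of at most 20 characters.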

services/engagement.py (new file)
@@ -0,0 +1,197 @@
"""
services/engagement.py
评论管家:主动评论、回复我的笔记评论、笔记加载等互动功能
"""
import json
import logging
import gradio as gr
from config_manager import ConfigManager
from mcp_client import get_mcp_client
from llm_service import LLMService
from services.connection import _get_llm_config
from services.persona import _resolve_persona
from services.hotspot import _pick_from_cache, _set_cache, _get_cache
logger = logging.getLogger("autobot")
cfg = ConfigManager()
def load_note_for_comment(feed_id, xsec_token, mcp_url):
"""加载目标笔记详情 (标题+正文+已有评论), 用于 AI 分析"""
if not feed_id or not xsec_token:
return "❌ 请先选择笔记", "", "", ""
try:
client = get_mcp_client(mcp_url)
result = client.get_feed_detail(feed_id, xsec_token, load_all_comments=True)
if "error" in result:
return f"{result['error']}", "", "", ""
full_text = result.get("text", "")
# 尝试分离正文和评论
if "评论" in full_text:
parts = full_text.split("评论", 1)
content_part = parts[0].strip()
comments_part = "评论" + parts[1] if len(parts) > 1 else ""
else:
content_part = full_text[:500]
comments_part = ""
return "✅ 笔记内容已加载", content_part[:800], comments_part[:1500], full_text
except Exception as e:
return f"{e}", "", "", ""
def ai_generate_comment(model, persona,
post_title, post_content, existing_comments):
"""AI 生成主动评论"""
persona = _resolve_persona(persona)
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "⚠️ 请先配置 LLM 提供商", "❌ LLM 未配置"
if not model:
return "⚠️ 请先连接 LLM", "❌ 未选模型"
if not post_title and not post_content:
return "⚠️ 请先加载笔记内容", "❌ 无笔记内容"
try:
svc = LLMService(api_key, base_url, model)
comment = svc.generate_proactive_comment(
persona, post_title, post_content[:600], existing_comments[:800]
)
return comment, "✅ 评论已生成"
except Exception as e:
logger.error(f"AI 评论生成失败: {e}")
return f"生成失败: {e}", f"{e}"
def send_comment(feed_id, xsec_token, comment_content, mcp_url):
"""发送评论到别人的笔记"""
if not all([feed_id, xsec_token, comment_content]):
return "❌ 缺少必要参数 (笔记ID / token / 评论内容)"
try:
client = get_mcp_client(mcp_url)
result = client.post_comment(feed_id, xsec_token, comment_content)
if "error" in result:
return f"{result['error']}"
return "✅ 评论已发送!"
except Exception as e:
return f"{e}"
# ---- 模块 B: 回复我的笔记评论 ----
def fetch_my_notes(mcp_url):
"""通过已保存的 userId 获取我的笔记列表"""
my_uid = cfg.get("my_user_id", "")
xsec = cfg.get("xsec_token", "")
if not my_uid:
return (
gr.update(choices=[], value=None),
"❌ 未配置用户 ID,请先到「账号登录」页填写并保存",
)
if not xsec:
return (
gr.update(choices=[], value=None),
"❌ 未获取 xsec_token,请先登录",
)
try:
client = get_mcp_client(mcp_url)
result = client.get_user_profile(my_uid, xsec)
if "error" in result:
return gr.update(choices=[], value=None), f"{result['error']}"
# 从 raw 中解析 feeds
raw = result.get("raw", {})
text = result.get("text", "")
data = None
if raw and isinstance(raw, dict):
for item in raw.get("content", []):
if item.get("type") == "text":
try:
data = json.loads(item["text"])
except (json.JSONDecodeError, KeyError):
pass
if not data:
try:
data = json.loads(text)
except (json.JSONDecodeError, TypeError):
pass
feeds = (data or {}).get("feeds") or []
if not feeds:
return (
gr.update(choices=[], value=None),
"⚠️ 未找到你的笔记,可能账号还没有发布内容",
)
entries = []
for f in feeds:
nc = f.get("noteCard") or {}
user = nc.get("user") or {}
interact = nc.get("interactInfo") or {}
entries.append({
"feed_id": f.get("id", ""),
"xsec_token": f.get("xsecToken", ""),
"title": nc.get("displayTitle", "未知标题"),
"author": user.get("nickname", user.get("nickName", "")),
"user_id": user.get("userId", ""),
"likes": interact.get("likedCount", "0"),
"type": nc.get("type", ""),
})
_set_cache("my_notes", entries)
choices = [
f"[{i+1}] {e['title'][:20]} | {e['type']} | ❤{e['likes']}"
for i, e in enumerate(entries)
]
return (
gr.update(choices=choices, value=choices[0] if choices else None),
f"✅ 找到 {len(entries)} 篇笔记",
)
except Exception as e:
return gr.update(choices=[], value=None), f"{e}"
def on_my_note_selected(selected):
return _pick_from_cache(selected, "my_notes")
def fetch_my_note_comments(feed_id, xsec_token, mcp_url):
"""获取我的笔记的评论列表"""
if not feed_id or not xsec_token:
return "❌ 请先选择笔记", ""
try:
client = get_mcp_client(mcp_url)
result = client.get_feed_detail(feed_id, xsec_token, load_all_comments=True)
if "error" in result:
return f"{result['error']}", ""
return "✅ 评论加载完成", result.get("text", "暂无评论")
except Exception as e:
return f"{e}", ""
def ai_reply_comment(model, persona, post_title, comment_text):
"""AI 生成评论回复"""
persona = _resolve_persona(persona)
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "⚠️ 请先配置 LLM 提供商", "❌ LLM 未配置"
if not model:
return "⚠️ 请先连接 LLM 并选择模型", "❌ 未选择模型"
if not comment_text:
return "请输入需要回复的评论内容", "⚠️ 请输入评论"
try:
svc = LLMService(api_key, base_url, model)
reply = svc.generate_reply(persona, post_title, comment_text)
return reply, "✅ 回复已生成"
except Exception as e:
logger.error(f"AI 回复生成失败: {e}")
return f"生成失败: {e}", f"{e}"
def send_reply(feed_id, xsec_token, reply_content, mcp_url):
"""发送评论回复"""
if not all([feed_id, xsec_token, reply_content]):
return "❌ 缺少必要参数"
try:
client = get_mcp_client(mcp_url)
result = client.post_comment(feed_id, xsec_token, reply_content)
if "error" in result:
return f"❌ 回复失败: {result['error']}"
return "✅ 回复已发送"
except Exception as e:
return f"❌ 发送失败: {e}"
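`fetch_my_notes` has to dig the JSON payload out of the MCP result, which wraps data as a list of `{"type": "text", "text": ...}` items with a plain-text fallback. That unwrapping logic can be sketched on its own (function name is illustrative):

```python
import json

def extract_payload(raw: dict, fallback_text: str = ""):
    # Try each "text" content item as JSON, then fall back to the plain text field
    if isinstance(raw, dict):
        for item in raw.get("content", []):
            if item.get("type") == "text":
                try:
                    return json.loads(item["text"])
                except (json.JSONDecodeError, KeyError):
                    pass
    try:
        return json.loads(fallback_text)
    except (json.JSONDecodeError, TypeError):
        return None

raw = {"content": [{"type": "text", "text": '{"feeds": [{"id": "n1"}]}'}]}
print(extract_payload(raw))  # → {'feeds': [{'id': 'n1'}]}
```

Returning `None` on total failure lets the caller fall through to the "纯文本" display path instead of raising.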

services/hotspot.py (new file)
@@ -0,0 +1,190 @@
"""
services/hotspot.py
热点探测、热点生成、笔记列表缓存(供评论管家主动评论使用)
"""
import threading
import logging
import gradio as gr
from llm_service import LLMService
from mcp_client import get_mcp_client
from services.connection import _get_llm_config
from services.persona import _resolve_persona
logger = logging.getLogger("autobot")
# ==================================================
# Tab 2: 热点探测
# ==================================================
def search_hotspots(keyword, sort_by, mcp_url):
"""搜索小红书热门内容"""
if not keyword:
return "❌ 请输入搜索关键词", ""
try:
client = get_mcp_client(mcp_url)
result = client.search_feeds(keyword, sort_by=sort_by)
if "error" in result:
return f"❌ 搜索失败: {result['error']}", ""
text = result.get("text", "无结果")
return "✅ 搜索完成", text
except Exception as e:
logger.error("热点搜索失败: %s", e)
return f"❌ 搜索失败: {e}", ""
def analyze_and_suggest(model, keyword, search_result):
"""AI 分析热点并给出建议"""
if not search_result:
return "❌ 请先搜索", "", ""
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "❌ 请先配置 LLM 提供商", "", ""
try:
svc = LLMService(api_key, base_url, model)
analysis = svc.analyze_hotspots(search_result)
topics = "\n".join(f"- {t}" for t in analysis.get("hot_topics", []))
patterns = "\n".join(f"- {p}" for p in analysis.get("title_patterns", []))
suggestions = "\n".join(
f"**{s['topic']}** - {s['reason']}"
for s in analysis.get("suggestions", [])
)
structure = analysis.get("content_structure", "")
summary = (
f"## 🔥 热门选题\n{topics}\n\n"
f"## 📝 标题套路\n{patterns}\n\n"
f"## 📐 内容结构\n{structure}\n\n"
f"## 💡 推荐选题\n{suggestions}"
)
return "✅ 分析完成", summary, keyword
except Exception as e:
logger.error("热点分析失败: %s", e)
return f"❌ 分析失败: {e}", "", ""
def generate_from_hotspot(model, topic_from_hotspot, style, search_result, sd_model_name, persona_text):
"""基于热点分析生成文案(自动适配 SD 模型,支持人设)"""
if not topic_from_hotspot:
return "", "", "", "", "❌ 请先选择或输入选题"
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "", "", "", "", "❌ 请先配置 LLM 提供商"
try:
svc = LLMService(api_key, base_url, model)
persona = _resolve_persona(persona_text) if persona_text else None
data = svc.generate_copy_with_reference(
topic=topic_from_hotspot,
style=style,
reference_notes=search_result[:2000],
sd_model_name=sd_model_name,
persona=persona,
)
tags = data.get("tags", [])
return (
data.get("title", ""),
data.get("content", ""),
data.get("sd_prompt", ""),
", ".join(tags),
"✅ 基于热点的文案已生成",
)
except Exception as e:
return "", "", "", "", f"❌ 生成失败: {e}"
# ==================================================
# Tab 3: 评论管家
# ==================================================
# ---- 共用: 笔记列表缓存(线程安全)----
# 主动评论缓存
_cached_proactive_entries: list[dict] = []
# 我的笔记评论缓存
_cached_my_note_entries: list[dict] = []
# 缓存互斥锁,防止并发回调产生竞态
_cache_lock = threading.RLock()
def _set_cache(name: str, entries: list):
"""线程安全地更新笔记列表缓存"""
global _cached_proactive_entries, _cached_my_note_entries
with _cache_lock:
if name == "proactive":
_cached_proactive_entries = list(entries)
else:
_cached_my_note_entries = list(entries)
def _get_cache(name: str) -> list:
"""线程安全地获取笔记列表缓存快照(返回副本)"""
with _cache_lock:
if name == "proactive":
return list(_cached_proactive_entries)
return list(_cached_my_note_entries)
def _fetch_and_cache(keyword, mcp_url, cache_name="proactive"):
"""通用: 获取笔记列表并线程安全地缓存"""
try:
client = get_mcp_client(mcp_url)
if keyword and keyword.strip():
entries = client.search_feeds_parsed(keyword.strip())
src = f"搜索「{keyword.strip()}"
else:
entries = client.list_feeds_parsed()
src = "首页推荐"
_set_cache(cache_name, entries)
if not entries:
return gr.update(choices=[], value=None), f"⚠️ 从{src}未找到笔记"
choices = []
for i, e in enumerate(entries):
title_short = (e["title"] or "无标题")[:28]
label = f"[{i+1}] {title_short} | @{e['author'] or '未知'} | ❤ {e['likes']}"
choices.append(label)
return (
gr.update(choices=choices, value=choices[0]),
f"✅ 从{src}获取 {len(entries)} 条笔记",
)
except Exception as e:
_set_cache(cache_name, [])
return gr.update(choices=[], value=None), f"{e}"
def _pick_from_cache(selected, cache_name="proactive"):
"""通用: 从缓存中提取选中条目的 feed_id / xsec_token / title线程安全快照"""
cache = _get_cache(cache_name)
if not selected or not cache:
return "", "", ""
try:
# 尝试从 [N] 前缀提取序号
idx = int(selected.split("]")[0].replace("[", "")) - 1
if 0 <= idx < len(cache):
e = cache[idx]
return e["feed_id"], e["xsec_token"], e.get("title", "")
except (ValueError, IndexError):
pass
# 回退: 模糊匹配标题
for e in cache:
if e.get("title", "")[:15] in selected:
return e["feed_id"], e["xsec_token"], e.get("title", "")
return "", "", ""
# ---- 模块 A: 主动评论他人 ----
def fetch_proactive_notes(keyword, mcp_url):
return _fetch_and_cache(keyword, mcp_url, "proactive")
def on_proactive_note_selected(selected):
return _pick_from_cache(selected, "proactive")
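`_pick_from_cache` recovers the cache index from the `[N]` prefix baked into each dropdown label, with a fuzzy title match as a fallback. A self-contained version of that selection logic (names are illustrative):

```python
def pick(selected: str, cache: list[dict]):
    # Primary path: parse the "[N]" prefix back into a 0-based cache index
    try:
        idx = int(selected.split("]")[0].replace("[", "")) - 1
        if 0 <= idx < len(cache):
            return cache[idx]
    except (ValueError, IndexError):
        pass
    # Fallback: fuzzy-match the (possibly truncated) title shown in the label
    for e in cache:
        if e.get("title", "")[:15] in selected:
            return e
    return None

cache = [{"feed_id": "a", "title": "春季穿搭"}, {"feed_id": "b", "title": "减脂餐"}]
print(pick("[2] 减脂餐 | @小美 | ❤ 120", cache))  # → {'feed_id': 'b', 'title': '减脂餐'}
```

Encoding the index in the label keeps the Gradio `Dropdown` stateless: the callback never needs a hidden state component to map a choice back to its `feed_id`/`xsec_token`.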

services/persona.py (new file)
@@ -0,0 +1,519 @@
"""
services/persona.py
人设管理:常量、关键词池、主题池、人设解析
"""
import random
import logging
from config_manager import ConfigManager
logger = logging.getLogger("autobot")
cfg = ConfigManager()
# ================= 人设池 =================
DEFAULT_PERSONAS = [
"赛博AI虚拟博主,住在2077年的数码女孩,用AI生成高颜值写真和全球场景打卡,与粉丝超高频互动",
"性感福利主播,身材火辣衣着大胆,专注分享穿衣显身材和私房写真风穿搭",
"身材管理健身美女,热爱分享好身材秘诀和穿搭显身材技巧",
"温柔知性的时尚博主,喜欢分享日常穿搭和生活美学",
"元气满满的大学生,热爱探店和平价好物分享",
"30岁都市白领丽人专注通勤穿搭和职场干货",
"精致妈妈,分享育儿经验和家居收纳技巧",
"文艺青年摄影师,喜欢记录旅行和城市角落",
"健身达人营养师,专注减脂餐和运动分享",
"资深美妆博主,擅长化妆教程和护肤测评",
"独居女孩,分享租房改造和独居生活仪式感",
"甜品烘焙爱好者,热衷分享自制甜点和下午茶",
"数码科技女生专注好用App和电子产品测评",
"小镇姑娘在大城市打拼,分享省钱攻略和成长日记",
"中医养生爱好者,分享节气养生和食疗方子",
"二次元coser喜欢分享cos日常和动漫周边",
"北漂程序媛,分享高效工作法和解压生活",
"复古穿搭博主热爱vintage风和中古饰品",
"考研上岸学姐,分享学习方法和备考经验",
"新手养猫人,记录和毛孩子的日常生活",
"咖啡重度爱好者,探遍城市独立咖啡馆",
"极简主义生活家,倡导断舍离和高质量生活",
"汉服爱好者,分享传统文化和国风穿搭",
"插画师小姐姐,分享手绘过程和创作灵感",
"海归女孩,分享中西文化差异和海外生活见闻",
"瑜伽老师,分享身心灵修行和自律生活",
"美甲设计师,分享流行甲型和美甲教程",
"家居软装设计师,分享小户型改造和氛围感布置",
]
RANDOM_PERSONA_LABEL = "🎲 随机人设(每次自动切换)"
# ================= 人设 → 分类关键词/主题池映射 =================
# 每个人设对应一组相符的评论关键词和主题,切换人设时自动同步
PERSONA_POOL_MAP = {
# ---- 性感福利类 ----
"性感福利主播": {
"topics": [
"辣妹穿搭", "内衣测评", "比基尼穿搭", "私房写真风穿搭", "吊带裙穿搭",
"低胸穿搭", "紧身连衣裙", "蕾丝穿搭", "泳衣测评", "居家睡衣穿搭",
"露背装穿搭", "热裤穿搭", "性感御姐穿搭", "渔网袜穿搭", "包臀裙穿搭",
"锁骨链饰品", "身材展示", "好身材日常", "氛围感私房照", "海边度假穿搭",
],
"keywords": [
"辣妹", "性感穿搭", "内衣", "比基尼", "吊带", "低胸",
"紧身", "蕾丝", "泳衣", "睡衣", "露背", "热裤",
"御姐", "好身材", "包臀裙", "身材展示", "私房", "氛围感",
],
},
# ---- 身材管理类 ----
"身材管理健身美女": {
"topics": [
"好身材穿搭", "显身材穿搭", "马甲线养成", "翘臀训练", "直角肩养成",
"天鹅颈锻炼", "小蛮腰秘诀", "腿型矫正", "体态管理", "维密身材",
"居家塑形", "健身穿搭", "运动内衣测评", "蜜桃臀训练", "锁骨养成",
"紧身穿搭", "比基尼身材", "纤腰丰臀", "身材对比照", "自律打卡",
],
"keywords": [
"身材", "好身材", "马甲线", "翘臀", "直角肩", "天鹅颈",
"小蛮腰", "健身女孩", "塑形", "体态", "蜜桃臀", "腰臀比",
"紧身", "显身材", "维密", "锁骨", "A4腰", "漫画腿",
],
},
# ---- 时尚穿搭类 ----
"温柔知性的时尚博主": {
"topics": [
"春季穿搭", "通勤穿搭", "约会穿搭", "显瘦穿搭", "法式穿搭",
"极简穿搭", "氛围感穿搭", "一衣多穿", "秋冬叠穿", "夏日清凉穿搭",
"生活美学", "衣橱整理", "配色技巧", "基础款穿搭", "轻熟风穿搭",
],
"keywords": [
"穿搭", "ootd", "早春穿搭", "通勤穿搭", "显瘦", "法式穿搭",
"极简风", "氛围感", "轻熟风", "高级感穿搭", "配色",
],
},
"元气满满的大学生": {
"topics": [
"学生党穿搭", "宿舍美食", "平价好物", "校园生活", "学生党护肤",
"期末复习", "社团活动", "寝室改造", "奶茶测评", "拍照打卡地",
"一人食食谱", "考研经验", "实习经验", "省钱攻略",
],
"keywords": [
"学生党", "平价好物", "宿舍", "校园", "奶茶", "探店",
"拍照", "省钱", "大学生活", "期末", "开学", "室友",
],
},
"30岁都市白领丽人": {
"topics": [
"通勤穿搭", "职场干货", "面试技巧", "简历优化", "时间管理",
"理财入门", "轻熟风穿搭", "职场妆容", "咖啡探店", "高效工作法",
"副业分享", "自律生活", "下班后充电", "职场人际关系",
],
"keywords": [
"通勤穿搭", "职场", "面试", "理财", "自律", "高效",
"咖啡", "轻熟", "白领", "上班族", "时间管理", "副业",
],
},
"精致妈妈": {
"topics": [
"育儿经验", "家居收纳", "辅食制作", "亲子游", "母婴好物",
"宝宝穿搭", "早教启蒙", "产后恢复", "家常菜做法", "小户型收纳",
"家庭教育", "孕期护理", "宝宝辅食", "妈妈穿搭",
],
"keywords": [
"育儿", "收纳", "辅食", "母婴", "亲子", "早教",
"宝宝", "家居", "待产", "产后", "妈妈", "家常菜",
],
},
"文艺青年摄影师": {
"topics": [
"旅行攻略", "小众旅行地", "拍照打卡地", "城市citywalk", "古镇旅行",
"手机摄影技巧", "胶片摄影", "人像摄影", "风光摄影", "街拍",
"咖啡探店", "文艺书店", "展览打卡", "独立书店",
],
"keywords": [
"旅行", "摄影", "打卡", "citywalk", "胶片", "拍照",
"小众", "展览", "文艺", "街拍", "风光", "人像",
],
},
"健身达人营养师": {
"topics": [
"减脂餐分享", "居家健身", "帕梅拉跟练", "跑步入门", "体态矫正",
"增肌餐", "蛋白质补充", "运动穿搭", "健身房攻略", "马甲线养成",
"热量计算", "健康早餐", "运动恢复", "减脂食谱",
],
"keywords": [
"减脂", "健身", "减脂餐", "蛋白质", "体态", "马甲线",
"帕梅拉", "跑步", "热量", "增肌", "运动", "健康餐",
],
},
"资深美妆博主": {
"topics": [
"妆容教程", "眼妆教程", "唇妆合集", "底妆测评", "护肤心得",
"防晒测评", "学生党平价护肤", "敏感肌护肤", "美白攻略",
"成分党护肤", "换季护肤", "早C晚A护肤", "抗老护肤",
],
"keywords": [
"护肤", "化妆教程", "眼影", "口红", "底妆", "防晒",
"美白", "敏感肌", "成分", "平价", "测评", "粉底",
],
},
"独居女孩": {
"topics": [
"独居生活", "租房改造", "氛围感房间", "一人食食谱", "好物分享",
"香薰推荐", "居家好物", "断舍离", "仪式感生活", "独居安全",
"解压方式", "emo急救指南", "桌面布置", "小户型装修",
],
"keywords": [
"独居", "租房改造", "好物", "氛围感", "一人食", "仪式感",
"解压", "居家", "香薰", "ins风", "房间", "断舍离",
],
},
"甜品烘焙爱好者": {
"topics": [
"烘焙教程", "0失败甜品", "下午茶推荐", "蛋糕教程", "面包制作",
"饼干烘焙", "奶油裱花", "巧克力甜品", "网红甜品", "便当制作",
"早餐食谱", "咖啡配甜品", "节日甜品", "低卡甜品",
],
"keywords": [
"烘焙", "甜品", "蛋糕", "面包", "下午茶", "曲奇",
"裱花", "抹茶", "巧克力", "奶油", "食谱", "烤箱",
],
},
"数码科技女生": {
"topics": [
"iPad生产力", "手机摄影技巧", "好用App推荐", "电子产品测评",
"桌面布置", "数码好物", "耳机测评", "平板学习", "生产力工具",
"手机壳推荐", "充电设备", "智能家居",
],
"keywords": [
"iPad", "App推荐", "数码", "测评", "手机", "耳机",
"桌面", "科技", "电子产品", "平板", "生产力", "充电",
],
},
"小镇姑娘在大城市打拼": {
"topics": [
"省钱攻略", "成长日记", "平价好物", "租房改造", "副业分享",
"理财入门", "独居生活", "面试技巧", "通勤穿搭", "自律生活",
"城市生存指南", "女性成长", "攒钱计划",
],
"keywords": [
"省钱", "平价", "租房", "副业", "理财", "成长",
"自律", "打工", "攒钱", "面试", "独居", "北漂",
],
},
"中医养生爱好者": {
"topics": [
"节气养生", "食疗方子", "泡脚养生", "体质调理", "艾灸",
"中药茶饮", "作息调整", "经络按摩", "养胃食谱", "祛湿方法",
"睡眠改善", "女性调理", "养生汤", "二十四节气",
],
"keywords": [
"养生", "食疗", "泡脚", "中医", "艾灸", "祛湿",
"节气", "体质", "养胃", "经络", "调理", "药膳",
],
},
"二次元coser": {
"topics": [
"cos日常", "动漫周边", "漫展攻略", "cos化妆教程", "假发造型",
"lolita穿搭", "二次元好物", "手办收藏", "动漫推荐", "cos道具制作",
"jk穿搭", "谷子收藏", "二次元摄影",
],
"keywords": [
"cos", "动漫", "二次元", "漫展", "lolita", "手办",
"jk", "假发", "谷子", "周边", "番剧", "coser",
],
},
"北漂程序媛": {
"topics": [
"高效工作法", "程序员日常", "好用App推荐", "副业分享", "自律生活",
"时间管理", "iPad生产力", "解压方式", "通勤穿搭", "理财入门",
"独居生活", "技术学习", "面试经验", "桌面布置",
],
"keywords": [
"程序员", "高效", "App推荐", "自律", "副业", "iPad",
"技术", "工作", "北漂", "面试", "代码", "桌面",
],
},
"复古穿搭博主": {
"topics": [
"vintage风穿搭", "中古饰品", "复古妆容", "二手vintage", "古着穿搭",
"法式穿搭", "复古包包", "跳蚤市场", "旧物改造", "港风穿搭",
"文艺穿搭", "配饰搭配", "vintage探店",
],
"keywords": [
"vintage", "复古", "中古", "古着", "港风", "法式",
"饰品", "二手", "旧物", "跳蚤市场", "复古穿搭", "文艺",
],
},
"考研上岸学姐": {
"topics": [
"考研经验", "英语学习方法", "书单推荐", "时间管理", "自律生活",
"考研择校", "政治复习", "数学刷题", "考研英语", "复试经验",
"专业课复习", "考研心态", "背诵技巧", "刷题方法",
],
"keywords": [
"考研", "英语学习", "书单", "自律", "学习方法", "上岸",
"刷题", "备考", "复习", "笔记", "时间管理", "择校",
],
},
"新手养猫人": {
"topics": [
"养猫日常", "猫粮测评", "猫咪用品", "新手养宠指南", "猫咪健康",
"猫咪行为", "驱虫攻略", "猫砂测评", "猫玩具推荐", "猫咪拍照",
"多猫家庭", "领养代替购买", "猫咪绝育",
],
"keywords": [
"养猫", "猫粮", "猫咪", "宠物", "猫砂", "驱虫",
"铲屎官", "喵喵", "猫玩具", "猫零食", "新手养猫", "猫咪日常",
],
},
"咖啡重度爱好者": {
"topics": [
"咖啡探店", "手冲咖啡", "咖啡豆推荐", "咖啡器具", "拿铁艺术",
"家庭咖啡", "咖啡配甜品", "独立咖啡馆", "冷萃咖啡", "咖啡知识",
"意式咖啡", "探店打卡", "咖啡拉花",
],
"keywords": [
"咖啡", "手冲", "拿铁", "探店", "咖啡豆", "美式",
"咖啡馆", "意式", "冷萃", "拉花", "咖啡器具", "独立咖啡馆",
],
},
"极简主义生活家": {
"topics": [
"断舍离", "极简生活", "收纳技巧", "高质量生活", "减法生活",
"胶囊衣橱", "极简护肤", "环保生活", "数字断舍离", "极简穿搭",
"极简房间", "消费降级", "物欲管理",
],
"keywords": [
"断舍离", "极简", "收纳", "高质量", "减法", "胶囊衣橱",
"简约", "环保", "整理", "少即是多", "极简风", "质感",
],
},
"汉服爱好者": {
"topics": [
"汉服穿搭", "国风穿搭", "传统文化", "汉服发型", "汉服配饰",
"汉服拍照", "古风妆容", "汉服日常", "汉服科普", "形制科普",
"古风摄影", "新中式穿搭", "汉服探店",
],
"keywords": [
"汉服", "国风", "传统文化", "古风", "新中式", "形制",
"发簪", "明制", "宋制", "唐制", "汉服日常", "古风摄影",
],
},
"插画师小姐姐": {
"topics": [
"手绘教程", "创作灵感", "iPad绘画", "插画分享", "水彩教程",
"Procreate技巧", "配色方案", "角色设计", "头像绘制", "手账素材",
"接稿经验", "画师日常", "绘画工具推荐",
],
"keywords": [
"插画", "手绘", "Procreate", "画画", "iPad绘画", "水彩",
"配色", "创作", "画师", "手账", "教程", "素材",
],
},
"海归女孩": {
"topics": [
"中西文化差异", "海外生活", "留学经验", "英语学习方法", "海归求职",
"旅行攻略", "异国美食", "海外好物", "文化冲击", "语言学习",
"签证攻略", "海归适应", "国外探店",
],
"keywords": [
"留学", "海归", "英语", "海外", "文化差异", "旅行",
"异国", "签证", "语言", "出国", "求职", "国外",
],
},
"瑜伽老师": {
"topics": [
"瑜伽入门", "冥想练习", "体态矫正", "呼吸法", "居家瑜伽",
"拉伸教程", "肩颈放松", "瑜伽体式", "自律生活", "身心灵",
"瑜伽穿搭", "晨练瑜伽", "睡前瑜伽",
],
"keywords": [
"瑜伽", "冥想", "体态", "拉伸", "放松", "呼吸",
"柔韧", "健康", "自律", "晨练", "入门", "体式",
],
},
"美甲设计师": {
"topics": [
"美甲教程", "流行甲型", "美甲合集", "简约美甲", "法式美甲",
"手绘美甲", "季节美甲", "显白美甲", "美甲配色", "短甲美甲",
"新娘美甲", "美甲工具推荐", "日式美甲",
],
"keywords": [
"美甲", "甲型", "法式美甲", "手绘", "显白", "短甲",
"指甲", "美甲教程", "配色", "日式美甲", "腮红甲", "猫眼甲",
],
},
"家居软装设计师": {
"topics": [
"小户型改造", "氛围感布置", "软装搭配", "家居好物", "收纳技巧",
"客厅布置", "卧室改造", "灯光设计", "绿植布置", "装修避坑",
"北欧风格", "ins风家居", "墙面装饰",
],
"keywords": [
"家居", "软装", "改造", "收纳", "氛围感", "小户型",
"装修", "灯光", "绿植", "北欧", "ins风", "布置",
],
},
# ---- 赛博/AI 虚拟博主类 ----
"赛博AI虚拟博主": {
"topics": [
"AI女孩日常", "虚拟人物写真", "AI生成美女", "赛博朋克穿搭", "未来风穿搭",
"全球场景打卡", "巴黎打卡写真", "东京街头拍照", "外太空写真", "古风仙侠写真",
"AI换装挑战", "粉丝许愿穿搭", "二次元风格写真", "女仆装写真", "护士制服写真",
"校园制服写真", "婚纱写真", "水下写真", "AI绘画教程", "虚拟人物背后故事",
],
"keywords": [
"AI女孩", "AI美女", "虚拟人物", "赛博朋克", "AI绘画", "AI写真",
"数码女孩", "2077", "未来风", "场景切换", "换装挑战", "粉丝许愿",
"高颜值", "特写", "全球打卡", "制服写真", "AI创作", "互动",
],
},
}
# 为"随机人设"使用的全量池(兼容旧逻辑)
DEFAULT_TOPICS = [
# 穿搭类
"春季穿搭", "通勤穿搭", "约会穿搭", "显瘦穿搭", "小个子穿搭",
"学生党穿搭", "韩系穿搭", "日系穿搭", "法式穿搭", "极简穿搭",
"国风穿搭", "运动穿搭", "闺蜜穿搭", "梨形身材穿搭", "微胖穿搭",
"氛围感穿搭", "一衣多穿", "秋冬叠穿", "夏日清凉穿搭",
# 美妆护肤类
"护肤心得", "妆容教程", "学生党平价护肤", "敏感肌护肤",
"抗老护肤", "美白攻略", "眼妆教程", "唇妆合集", "底妆测评",
"防晒测评", "早C晚A护肤", "成分党护肤", "换季护肤",
# 美食类
"减脂餐分享", "一人食食谱", "宿舍美食", "烘焙教程", "家常菜做法",
"探店打卡", "咖啡探店", "早餐食谱", "下午茶推荐", "火锅推荐",
"奶茶测评", "便当制作", "0失败甜品",
# 生活家居类
"好物分享", "平价好物", "居家好物", "收纳技巧", "租房改造",
"小户型装修", "氛围感房间", "香薰推荐", "桌面布置", "断舍离",
# 旅行出行类
"旅行攻略", "周末去哪玩", "小众旅行地", "拍照打卡地", "露营攻略",
"自驾游攻略", "古镇旅行", "海岛度假", "城市citywalk",
# 学习成长类
"书单推荐", "自律生活", "时间管理", "考研经验", "英语学习方法",
"理财入门", "副业分享", "简历优化", "面试技巧",
# 数码科技类
"iPad生产力", "手机摄影技巧", "好用App推荐", "电子产品测评",
# 健身运动类
"居家健身", "帕梅拉跟练", "跑步入门", "瑜伽入门", "体态矫正",
# 宠物类
"养猫日常", "养狗经验", "宠物好物", "新手养宠指南",
# 情感心理类
"独居生活", "emo急救指南", "社恐自救", "女性成长", "情绪管理",
]
DEFAULT_STYLES = [
"好物种草", "干货教程", "情绪共鸣", "生活Vlog", "测评避雷",
"知识科普", "经验分享", "清单合集", "对比测评", "沉浸式体验",
]
# 全量评论关键词池(兼容旧逻辑 / 随机人设)
DEFAULT_COMMENT_KEYWORDS = [
# 穿搭时尚
"穿搭", "ootd", "早春穿搭", "通勤穿搭", "显瘦", "小个子穿搭",
# 美妆护肤
"护肤", "化妆教程", "平价护肤", "防晒", "美白", "眼影",
# 美食
"美食", "减脂餐", "探店", "咖啡", "烘焙", "食谱",
# 生活好物
"好物推荐", "平价好物", "居家好物", "收纳", "租房改造",
# 旅行
"旅行", "攻略", "打卡", "周末去哪玩", "露营",
# 学习成长
"自律", "书单", "考研", "英语学习", "副业",
# 生活日常
"生活日常", "独居", "vlog", "仪式感", "解压",
# 健身
"减脂", "健身", "瑜伽", "体态",
# 宠物
"养猫", "养狗", "宠物",
]
def _match_persona_pools(persona_text: str) -> dict | None:
"""根据人设文本模糊匹配对应的关键词池和主题池
返回 {"topics": [...], "keywords": [...]} 或 None(未匹配)
"""
if not persona_text or persona_text == RANDOM_PERSONA_LABEL:
return None
# 精确匹配
for key, pools in PERSONA_POOL_MAP.items():
if key in persona_text or persona_text in key:
return pools
# 关键词模糊匹配
_CATEGORY_HINTS = {
"时尚|穿搭|搭配|衣服": "温柔知性的时尚博主",
"大学|学生|校园": "元气满满的大学生",
"白领|职场|通勤|上班": "30岁都市白领丽人",
"妈妈|育儿|宝宝|母婴": "精致妈妈",
"摄影|旅行|旅游|文艺": "文艺青年摄影师",
"健身|运动|减脂|增肌|营养": "健身达人营养师",
"美妆|化妆|护肤|美白": "资深美妆博主",
"独居|租房|一人": "独居女孩",
"烘焙|甜品|蛋糕|面包": "甜品烘焙爱好者",
"数码|科技|App|电子": "数码科技女生",
"小镇|打拼|省钱|攒钱": "小镇姑娘在大城市打拼",
"中医|养生|食疗|节气": "中医养生爱好者",
"二次元|cos|动漫|漫展": "二次元coser",
"程序|代码|开发|码农": "北漂程序媛",
"复古|vintage|中古|古着": "复古穿搭博主",
"考研|备考|上岸|学习方法": "考研上岸学姐",
"猫|铲屎|喵": "新手养猫人",
"咖啡|手冲|拿铁": "咖啡重度爱好者",
"极简|断舍离|简约": "极简主义生活家",
"汉服|国风|传统文化": "汉服爱好者",
"插画|手绘|画画|绘画": "插画师小姐姐",
"海归|留学|海外": "海归女孩",
"瑜伽|冥想|身心灵": "瑜伽老师",
"美甲|甲型|指甲": "美甲设计师",
"家居|软装|装修|改造": "家居软装设计师",
}
for hints, persona_key in _CATEGORY_HINTS.items():
if any(h in persona_text for h in hints.split("|")):
return PERSONA_POOL_MAP.get(persona_key)
return None
def get_persona_topics(persona_text: str) -> list[str]:
"""获取人设对应的主题池,未匹配则返回全量池"""
pools = _match_persona_pools(persona_text)
return pools["topics"] if pools else DEFAULT_TOPICS
def get_persona_keywords(persona_text: str) -> list[str]:
"""获取人设对应的评论关键词池,未匹配则返回全量池"""
pools = _match_persona_pools(persona_text)
return pools["keywords"] if pools else DEFAULT_COMMENT_KEYWORDS
def on_persona_changed(persona_text: str):
"""人设切换时联动更新评论关键词池、主题池、队列主题池,并保存到配置"""
# 保存人设到配置
cfg.set("persona", persona_text)
# 更新关键词和主题池
keywords = get_persona_keywords(persona_text)
topics = get_persona_topics(persona_text)
keywords_str = ", ".join(keywords)
topics_str = ", ".join(topics)
matched = _match_persona_pools(persona_text)
if matched:
label = persona_text[:15] if len(persona_text) > 15 else persona_text
hint = f"✅ 已切换至「{label}」专属关键词/主题池"
else:
hint = " 使用通用全量关键词/主题池"
# 返回:自动运营的关键词池、主题池、提示信息、队列主题池
return keywords_str, topics_str, hint, topics_str
def _resolve_persona(persona_text: str, log_fn=None) -> str:
"""解析人设:如果是随机人设则从池中随机选一个,否则原样返回"""
if not persona_text or persona_text == RANDOM_PERSONA_LABEL:
chosen = random.choice(DEFAULT_PERSONAS)
log_fn and log_fn(f"🎭 本次人设: {chosen[:20]}...")
return chosen
# 检查是否选的是池中某个人设Dropdown选中
return persona_text
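The fuzzy branch of `_match_persona_pools` is a hint table: each key is a `|`-separated bag of trigger words, and the first entry whose trigger appears anywhere in the free-form persona text wins. A minimal sketch with a two-entry table (entries abbreviated from the real `_CATEGORY_HINTS`):

```python
HINTS = {
    "健身|运动|减脂|增肌|营养": "健身达人营养师",
    "咖啡|手冲|拿铁": "咖啡重度爱好者",
}

def match_persona(text: str):
    # Insertion order decides priority when several hint bags could match
    for hints, persona_key in HINTS.items():
        if any(h in text for h in hints.split("|")):
            return persona_key
    return None

print(match_persona("每天泡在健身房的撸铁女孩"))  # → 健身达人营养师
```

Because dicts preserve insertion order (Python 3.7+), ordering the table from specific to generic hints gives deterministic tie-breaking for persona texts that mention several domains.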

services/profile.py (new file)
@@ -0,0 +1,176 @@
"""
services/profile.py
小红书账号 Profile 解析与可视化函数
"""
import re
import json
import logging
import matplotlib
import matplotlib.font_manager
import matplotlib.pyplot as plt
from mcp_client import get_mcp_client
_font_candidates = ["Microsoft YaHei", "SimHei", "PingFang SC", "WenQuanYi Micro Hei"]
for _fn in _font_candidates:
try:
matplotlib.font_manager.findfont(_fn, fallback_to_default=False)
plt.rcParams["font.sans-serif"] = [_fn]
break
except Exception:
continue
plt.rcParams["axes.unicode_minus"] = False
logger = logging.getLogger("autobot")
# ==================================================
# Tab 4: 数据看板 (我的账号)
# ==================================================
def _parse_profile_json(text: str):
"""尝试从文本中解析用户 profile JSON"""
if not text:
return None
# 直接 JSON
try:
return json.loads(text)
except (json.JSONDecodeError, TypeError):
pass
# 可能包含 Markdown 代码块
m = re.search(r'```(?:json)?\s*\n([\s\S]+?)\n```', text)
if m:
try:
return json.loads(m.group(1))
except (json.JSONDecodeError, TypeError):
pass
return None
def _parse_count(val) -> float:
"""解析数字字符串, 支持 '1.2万' 格式"""
if isinstance(val, (int, float)):
return float(val)
s = str(val).strip()
if "万" in s:
try:
return float(s.replace("万", "")) * 10000
except ValueError:
pass
try:
return float(s)
except ValueError:
return 0.0
def fetch_my_profile(user_id, xsec_token, mcp_url):
"""获取我的账号数据, 返回结构化信息 + 可视化图表"""
if not user_id or not xsec_token:
return "❌ 请填写你的用户 ID 和 xsec_token", "", None, None, None
try:
client = get_mcp_client(mcp_url)
result = client.get_user_profile(user_id, xsec_token)
if "error" in result:
return f"{result['error']}", "", None, None, None
raw = result.get("raw", {})
text = result.get("text", "")
# 尝试从 raw 或 text 解析 JSON
data = None
if raw and isinstance(raw, dict):
content_list = raw.get("content", [])
for item in content_list:
if item.get("type") == "text":
data = _parse_profile_json(item.get("text", ""))
if data:
break
if not data:
data = _parse_profile_json(text)
if not data:
return "✅ 数据加载完成 (纯文本)", text, None, None, None
# ---- 提取基本信息 (注意 MCP 对新号可能返回 null) ----
basic = data.get("userBasicInfo") or {}
interactions = data.get("interactions") or []
feeds = data.get("feeds") or []
gender_map = {0: "未知", 1: "男", 2: "女"}
info_lines = [
f"## 👤 {basic.get('nickname', '未知')}",
f"- **小红书号**: {basic.get('redId', '-')}",
f"- **性别**: {gender_map.get(basic.get('gender', 0), '未知')}",
f"- **IP 属地**: {basic.get('ipLocation', '-')}",
f"- **简介**: {basic.get('desc', '-')}",
"",
"### 📊 核心数据",
]
for inter in interactions:
info_lines.append(f"- **{inter.get('name', '')}**: {inter.get('count', '0')}")
info_lines.append(f"\n### 📝 展示笔记: {len(feeds)}")
profile_md = "\n".join(info_lines)
# ---- 互动数据柱状图 ----
fig_interact = None
if interactions:
inter_data = {i["name"]: _parse_count(i["count"]) for i in interactions}
fig_interact, ax = plt.subplots(figsize=(4, 3), dpi=100)
labels = list(inter_data.keys())
values = list(inter_data.values())
colors = ["#FF6B6B", "#4ECDC4", "#45B7D1"][:len(labels)]
ax.bar(labels, values, color=colors, edgecolor="white", linewidth=0.5)
ax.set_title("账号核心指标", fontsize=12, fontweight="bold")
for i, v in enumerate(values):
display = f"{v/10000:.1f}万" if v >= 10000 else str(int(v))
ax.text(i, v + max(values) * 0.02, display, ha="center", fontsize=9)
ax.set_ylabel("")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
fig_interact.tight_layout()
# ---- 笔记点赞分布图 ----
fig_notes = None
if feeds:
titles, likes = [], []
for f in feeds[:15]:
nc = f.get("noteCard") or {}
t = (nc.get("displayTitle", "") or "无标题")[:12]
lk = _parse_count((nc.get("interactInfo") or {}).get("likedCount", "0"))
titles.append(t)
likes.append(lk)
fig_notes, ax2 = plt.subplots(figsize=(7, 3.5), dpi=100)
ax2.barh(range(len(titles)), likes, color="#FF6B6B", edgecolor="white")
ax2.set_yticks(range(len(titles)))
ax2.set_yticklabels(titles, fontsize=8)
ax2.set_title(f"笔记点赞排行 (Top {len(titles)})", fontsize=12, fontweight="bold")
ax2.invert_yaxis()
for i, v in enumerate(likes):
display = f"{v/10000:.1f}万" if v >= 10000 else str(int(v))
ax2.text(v + (max(likes) * 0.01 if max(likes) > 0 else 0), i, display, va="center", fontsize=8)
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
fig_notes.tight_layout()
# ---- 笔记详情表格 (Markdown) ----
table_lines = [
"### 📋 笔记数据明细",
"| # | 标题 | 类型 | ❤ 点赞 |",
"|---|------|------|--------|",
]
for i, f in enumerate(feeds):
nc = f.get("noteCard") or {}
t = (nc.get("displayTitle", "") or "无标题")[:25]
tp = "📹 视频" if nc.get("type") == "video" else "📷 图文"
lk = (nc.get("interactInfo") or {}).get("likedCount", "0")
table_lines.append(f"| {i+1} | {t} | {tp} | {lk} |")
notes_table = "\n".join(table_lines)
return "✅ 数据加载完成", profile_md, fig_interact, fig_notes, notes_table
except Exception as e:
logger.error(f"获取我的数据失败: {e}")
return f"{e}", "", None, None, None

services/queue_ops.py (new file)
@@ -0,0 +1,352 @@
"""
services/queue_ops.py
发布队列操作:生成入队、状态管理、发布控制
"""
import os
import time
import random
import logging
from config_manager import ConfigManager, OUTPUT_DIR
from publish_queue import (
PublishQueue, QueuePublisher,
STATUS_DRAFT, STATUS_APPROVED, STATUS_SCHEDULED, STATUS_PUBLISHING,
STATUS_PUBLISHED, STATUS_FAILED, STATUS_REJECTED, STATUS_LABELS,
)
from llm_service import LLMService
from sd_service import SDService
from mcp_client import get_mcp_client
from services.connection import _get_llm_config
from services.persona import DEFAULT_TOPICS, DEFAULT_STYLES, _resolve_persona
from services.content import generate_copy, generate_images
from services.rate_limiter import _increment_stat, _clear_error_streak
cfg = ConfigManager()
logger = logging.getLogger("autobot")
# 模块级依赖(通过 configure() 注入)
_pub_queue: "PublishQueue | None" = None
_queue_publisher: "QueuePublisher | None" = None
_analytics = None
_log_fn = None
def configure(pub_queue, queue_publisher, analytics_svc, log_fn=None):
"""从 main.py 初始化段注入队列和分析服务"""
global _pub_queue, _queue_publisher, _analytics, _log_fn
_pub_queue = pub_queue
_queue_publisher = queue_publisher
_analytics = analytics_svc
_log_fn = log_fn
# 注册发布回调(在依赖注入完成后)
_queue_publisher.set_publish_callback(_queue_publish_callback)
_queue_publisher.set_log_callback(_log_fn or _log)
def _log(msg: str):
if _log_fn:
_log_fn(msg)
else:
logger.info("[queue] %s", msg)
# ==================================================
# 发布队列相关函数
# ==================================================
def generate_to_queue(topics_str, sd_url_val, sd_model_name, model, persona_text=None,
quality_mode_val=None, face_swap_on=False, count=1,
scheduled_time=None):
"""批量生成内容 → 加入发布队列(不直接发布)"""
try:
topics = [t.strip() for t in topics_str.split(",") if t.strip()] if topics_str else DEFAULT_TOPICS
use_weights = cfg.get("use_smart_weights", True) and _analytics.has_weights
api_key, base_url, _ = _get_llm_config()
if not api_key:
return "❌ LLM 未配置"
if not sd_url_val or not sd_model_name:
return "❌ SD WebUI 未连接或未选择模型"
count = max(1, min(int(count), 10))
results = []
for i in range(count):
try:
_log(f"📋 [队列生成] 正在生成第 {i+1}/{count} 篇...")
if use_weights:
topic = _analytics.get_weighted_topic(topics)
style = _analytics.get_weighted_style(DEFAULT_STYLES)
else:
topic = random.choice(topics)
style = random.choice(DEFAULT_STYLES)
svc = LLMService(api_key, base_url, model)
persona = _resolve_persona(persona_text) if persona_text else None
if use_weights:
weight_insights = f"高权重主题: {', '.join(list(_analytics._weights.get('topic_weights', {}).keys())[:5])}\n"
weight_insights += f"权重摘要: {_analytics.weights_summary}"
title_advice = _analytics.get_title_advice()
hot_tags = ", ".join(_analytics.get_top_tags(8))
try:
data = svc.generate_weighted_copy(topic, style, weight_insights, title_advice, hot_tags, sd_model_name=sd_model_name, persona=persona)
except Exception:
data = svc.generate_copy(topic, style, sd_model_name=sd_model_name, persona=persona)
else:
data = svc.generate_copy(topic, style, sd_model_name=sd_model_name, persona=persona)
title = (data.get("title", "") or "")[:20]
content = data.get("content", "")
sd_prompt = data.get("sd_prompt", "")
tags = data.get("tags", [])
if use_weights:
top_tags = _analytics.get_top_tags(5)
for t in top_tags:
if t not in tags:
tags.append(t)
tags = tags[:10]
if not title:
_log(f"⚠️ 第 {i+1} 篇文案生成失败,跳过")
continue
# 生成图片
sd_svc = SDService(sd_url_val)
face_image = None
if face_swap_on:
face_image = SDService.load_face_image()
images = sd_svc.txt2img(prompt=sd_prompt, model=sd_model_name,
face_image=face_image,
quality_mode=quality_mode_val or "快速 (约30秒)",
persona=persona)
if not images:
_log(f"⚠️ 第 {i+1} 篇图片生成失败,跳过")
continue
# 保存备份
ts = int(time.time())
safe_title = re.sub(r'[\\/*?:"<>|]', "", title)[:20]
backup_dir = os.path.join(OUTPUT_DIR, f"{ts}_{safe_title}")
os.makedirs(backup_dir, exist_ok=True)
with open(os.path.join(backup_dir, "文案.txt"), "w", encoding="utf-8") as f:
f.write(f"标题: {title}\n风格: {style}\n主题: {topic}\n\n{content}\n\n标签: {', '.join(tags)}\n\nSD Prompt: {sd_prompt}")
image_paths = []
for idx, img in enumerate(images):
if isinstance(img, Image.Image):
path = os.path.abspath(os.path.join(backup_dir, f"{idx+1}.jpg"))
if img.mode != "RGB":
img = img.convert("RGB")
img.save(path, format="JPEG", quality=95)
image_paths.append(path)
if not image_paths:
continue
# 加入队列
item_id = _pub_queue.add(
title=title, content=content, sd_prompt=sd_prompt,
tags=tags, image_paths=image_paths, backup_dir=backup_dir,
topic=topic, style=style, persona=persona or "",
status=STATUS_DRAFT, scheduled_time=scheduled_time,
)
results.append(f"#{item_id} {title}")
_log(f"📋 已加入队列 #{item_id}: {title}")
# 多篇间隔
if i < count - 1:
time.sleep(2)
except Exception as e:
_log(f"⚠️ 第 {i+1} 篇生成异常: {e}")
continue
if not results:
return "❌ 所有内容生成失败,请检查配置"
return f"✅ 已生成 {len(results)} 篇内容加入队列:\n" + "\n".join(f" - {r}" for r in results)
except Exception as e:
return f"❌ 批量生成异常: {e}"
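上面主题/风格的选取遵循"有权重则加权抽样,否则退回均匀随机"的降级策略。`_analytics.get_weighted_topic` 的内部实现不在本段 diff 中,下面用标准库 `random.choices` 给出一个同思路的独立简化版(权重数据为虚构):

```python
# 独立简化版:有权重按权重抽样,否则均匀随机(仅演示降级思路)
import random

def pick_topic(topics, weights=None):
    if weights:
        pool = [t for t in topics if t in weights]
        if pool:
            return random.choices(pool, weights=[weights[t] for t in pool], k=1)[0]
    return random.choice(topics)  # 无权重数据时的降级路径
```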
def _queue_publish_callback(item: dict) -> tuple[bool, str]:
"""队列发布回调: 从队列项数据发布到小红书"""
try:
mcp_url = cfg.get("mcp_url", "http://localhost:18060/mcp")
client = get_mcp_client(mcp_url)
title = item.get("title", "")
content = item.get("content", "")
image_paths = item.get("image_paths", [])
tags = item.get("tags", [])
if not title or not image_paths:
return False, "标题或图片缺失"
# 验证图片文件存在
valid_paths = [p for p in image_paths if os.path.isfile(p)]
if not valid_paths:
return False, "所有图片文件不存在"
result = client.publish_content(
title=title, content=content, images=valid_paths, tags=tags,
)
if "error" in result:
return False, result["error"]
_increment_stat("publishes")
_clear_error_streak()
return True, result.get("text", "发布成功")
except Exception as e:
return False, str(e)
def queue_format_table():
"""返回当前队列的完整表格(不过滤)"""
return _pub_queue.format_queue_table() if _pub_queue else ""
def queue_format_calendar():
"""返回未来14天的日历视图"""
return _pub_queue.format_calendar(14) if _pub_queue else ""
def queue_refresh_table(status_filter):
"""刷新队列表格"""
statuses = None
if status_filter and status_filter != "全部":
status_map = {v: k for k, v in STATUS_LABELS.items()}
if status_filter in status_map:
statuses = [status_map[status_filter]]
return _pub_queue.format_queue_table(statuses)
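`queue_refresh_table` 用 `{v: k for k, v in STATUS_LABELS.items()}` 把中文标签反查回内部状态码,该写法要求各标签值互不重复。下面用虚构标签独立演示这一过滤逻辑:

```python
# 独立演示:标签 → 状态码的反查字典(标签内容为虚构示例)
STATUS_LABELS_DEMO = {"draft": "📝 草稿", "approved": "✅ 待发布", "failed": "❌ 失败"}
label_to_status = {v: k for k, v in STATUS_LABELS_DEMO.items()}

def pick_statuses(status_filter):
    if status_filter and status_filter != "全部":
        if status_filter in label_to_status:
            return [label_to_status[status_filter]]
    return None  # None 表示不过滤
```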
def queue_refresh_calendar():
"""刷新日历视图"""
return _pub_queue.format_calendar(14)
def queue_preview_item(item_id_str):
"""预览队列项"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
return _pub_queue.format_preview(item_id)
except (ValueError, TypeError):
return "❌ 请输入有效的队列项 ID数字"
def queue_approve_item(item_id_str, scheduled_time_str):
"""审核通过"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
sched = scheduled_time_str.strip() if scheduled_time_str else None
ok = _pub_queue.approve(item_id, scheduled_time=sched)
if ok:
status = "已排期" if sched else "待发布"
return f"✅ #{item_id} 已审核通过 → {status}"
return f"❌ #{item_id} 无法审核(可能不是草稿/失败状态)"
except (ValueError, TypeError):
return "❌ 请输入有效的 ID"
def queue_reject_item(item_id_str):
"""拒绝队列项"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
ok = _pub_queue.reject(item_id)
return f"✅ #{item_id} 已拒绝" if ok else f"❌ #{item_id} 无法拒绝"
except (ValueError, TypeError):
return "❌ 请输入有效的 ID"
def queue_delete_item(item_id_str):
"""删除队列项"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
ok = _pub_queue.delete(item_id)
return f"✅ #{item_id} 已删除" if ok else f"❌ #{item_id} 无法删除(可能正在发布中)"
except (ValueError, TypeError):
return "❌ 请输入有效的 ID"
def queue_retry_item(item_id_str):
"""重试失败项"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
ok = _pub_queue.retry(item_id)
return f"✅ #{item_id} 已重新加入待发布" if ok else f"❌ #{item_id} 无法重试(不是失败状态)"
except (ValueError, TypeError):
return "❌ 请输入有效的 ID"
def queue_publish_now(item_id_str):
"""立即发布队列项"""
try:
item_id = int(str(item_id_str).strip().replace("#", ""))
return _queue_publisher.publish_now(item_id)
except (ValueError, TypeError):
return "❌ 请输入有效的 ID"
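上面各 `queue_*` 函数重复着同一段 ID 解析逻辑去空格、去 `#`、转 `int`。若要进一步收敛可以抽出一个辅助函数以下为假设性草案非本提交内容

```python
# 假设性辅助函数(非本提交内容):集中解析 "#12" / " 12 " 这类队列 ID 输入
def parse_item_id(raw):
    """解析成功返回 int失败返回 None调用方据此决定错误提示。"""
    try:
        return int(str(raw).strip().replace("#", ""))
    except (ValueError, TypeError):
        return None
```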
def queue_start_processor():
"""启动队列后台处理器"""
if _queue_publisher.is_running:
return "⚠️ 队列处理器已在运行中"
_queue_publisher.start(check_interval=60)
return "✅ 队列处理器已启动,每分钟检查待发布项"
def queue_stop_processor():
"""停止队列后台处理器"""
if not _queue_publisher.is_running:
return "⚠️ 队列处理器未在运行"
_queue_publisher.stop()
return "🛑 队列处理器已停止"
def queue_get_status():
"""获取队列状态摘要"""
counts = _pub_queue.count_by_status()
running = "🟢 运行中" if _queue_publisher.is_running else "⚪ 未启动"
parts = [f"**队列处理器**: {running}"]
for s, label in STATUS_LABELS.items():
cnt = counts.get(s, 0)
if cnt > 0:
parts.append(f"{label}: {cnt}")
total = sum(counts.values())
parts.append(f"**合计**: {total}")
return " · ".join(parts)
def queue_batch_approve(status_filter):
"""批量审核通过所有草稿"""
items = _pub_queue.list_by_status([STATUS_DRAFT])
if not items:
return "📭 没有待审核的草稿"
approved = 0
for item in items:
if _pub_queue.approve(item["id"]):
approved += 1
return f"✅ 已批量审核通过 {approved}"
def queue_generate_and_refresh(topics_str, sd_url_val, sd_model_name, model,
persona_text, quality_mode_val, face_swap_on,
gen_count, gen_schedule_time):
"""生成内容到队列 + 刷新表格"""
msg = generate_to_queue(
topics_str, sd_url_val, sd_model_name, model,
persona_text=persona_text, quality_mode_val=quality_mode_val,
face_swap_on=face_swap_on, count=gen_count,
scheduled_time=gen_schedule_time.strip() if gen_schedule_time else None,
)
table = _pub_queue.format_queue_table()
calendar = _pub_queue.format_calendar(14)
status = queue_get_status()
return msg, table, calendar, status
# 调度器下次执行时间追踪

services/rate_limiter.py (new file, 110 lines)
@@ -0,0 +1,110 @@
"""
services/rate_limiter.py
频率控制:每日限额、冷却管理
"""
import time
import threading
from datetime import datetime
# ---- 操作记录:防重复 & 每日统计 ----
_op_history = {
"commented_feeds": set(), # 已评论的 feed_id
"replied_comments": set(), # 已回复的 comment_id
"liked_feeds": set(), # 已点赞的 feed_id
"favorited_feeds": set(), # 已收藏的 feed_id
}
_daily_stats = {
"date": "",
"comments": 0,
"likes": 0,
"favorites": 0,
"publishes": 0,
"replies": 0,
"errors": 0,
}
# 每日操作上限
DAILY_LIMITS = {
"comments": 30,
"likes": 80,
"favorites": 50,
"publishes": 8,
"replies": 40,
}
# 连续错误计数 → 冷却
_consecutive_errors = 0
_error_cooldown_until = 0.0
# 线程锁,保护 stats/history 并发写入
_stats_lock = threading.Lock()
def _reset_daily_stats_if_needed():
"""每天自动重置统计"""
today = datetime.now().strftime("%Y-%m-%d")
if _daily_stats["date"] != today:
_daily_stats.update({
"date": today, "comments": 0, "likes": 0,
"favorites": 0, "publishes": 0, "replies": 0, "errors": 0,
})
# 每日重置历史记录(允许隔天重复互动)
for k in _op_history:
_op_history[k].clear()
def _check_daily_limit(op_type: str) -> bool:
"""检查是否超出每日限额"""
_reset_daily_stats_if_needed()
limit = DAILY_LIMITS.get(op_type, 999)
current = _daily_stats.get(op_type, 0)
return current < limit
def _increment_stat(op_type: str):
    """增加操作计数(持锁写入,防止多线程并发丢计数)"""
    with _stats_lock:
        _reset_daily_stats_if_needed()
        _daily_stats[op_type] = _daily_stats.get(op_type, 0) + 1
def _record_error(log_fn=None):
    """记录错误;连续错误触发冷却。log_fn 可选,用于写入日志。"""
    global _consecutive_errors, _error_cooldown_until
    with _stats_lock:
        _consecutive_errors += 1
        _daily_stats["errors"] = _daily_stats.get("errors", 0) + 1
    if _consecutive_errors >= 3:
        cooldown = min(60 * _consecutive_errors, 600)  # 最多冷却 10 分钟
        _error_cooldown_until = time.time() + cooldown
        if log_fn:
            log_fn(f"⚠️ 连续 {_consecutive_errors} 次错误,冷却 {cooldown}s")
def _clear_error_streak():
"""操作成功后清除连续错误记录"""
global _consecutive_errors
_consecutive_errors = 0
def _is_in_cooldown() -> bool:
"""检查是否在错误冷却期"""
return time.time() < _error_cooldown_until
def _is_in_operating_hours(start_hour: int = 7, end_hour: int = 23) -> bool:
"""检查是否在运营时间段"""
now_hour = datetime.now().hour
return start_hour <= now_hour < end_hour
def _get_stats_summary() -> str:
"""获取今日运营统计摘要"""
_reset_daily_stats_if_needed()
s = _daily_stats
lines = [
f"📊 **今日运营统计** ({s['date']})",
f"- 💬 评论: {s['comments']}/{DAILY_LIMITS['comments']}",
f"- ❤️ 点赞: {s['likes']}/{DAILY_LIMITS['likes']}",
f"- ⭐ 收藏: {s['favorites']}/{DAILY_LIMITS['favorites']}",
f"- 🚀 发布: {s['publishes']}/{DAILY_LIMITS['publishes']}",
f"- 💌 回复: {s['replies']}/{DAILY_LIMITS['replies']}",
f"- ❌ 错误: {s['errors']}",
]
return "\n".join(lines)
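rate_limiter 的调用方(如 engagement、queue_ops通常按"冷却检查 → 限额检查 → 执行 → 计数或记错"的顺序组合这些函数。下面是一个不依赖本模块的独立简化版,演示这条守卫链路(上限和冷却参数为演示值):

```python
# 独立简化版:限额 + 连续错误冷却的守卫链路(状态内联,仅演示流程)
import time

LIMIT = 3                 # 演示用的单项每日上限
stats = {"comments": 0}
consecutive_errors = 0
cooldown_until = 0.0

def guarded_comment(action):
    global consecutive_errors, cooldown_until
    if time.time() < cooldown_until:
        return "cooldown"                     # 冷却期内直接跳过
    if stats["comments"] >= LIMIT:
        return "limit"                        # 超出每日限额
    try:
        action()
        stats["comments"] += 1                # 成功:计数并清除连续错误
        consecutive_errors = 0
        return "ok"
    except Exception:
        consecutive_errors += 1               # 失败:连续 3 次进入冷却
        if consecutive_errors >= 3:
            cooldown_until = time.time() + min(60 * consecutive_errors, 600)
        return "error"
```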

services/scheduler.py (new file, 1086 lines)
(diff suppressed: file too large)

ui/app.py (new file, 1368 lines)
(diff suppressed: file too large)