The sky is falling: the new OpenClaw release suddenly stopped triggering the prompt cache. Here's a fix!
As everyone knows, OpenClaw is a token-guzzling beast. GPT calls are nearly free these days, but some of you may be on other API channels, or using third-party relay stations without an account pool of your own. If cache triggering isn't wired up correctly, your token costs multiply many times over, and anything that touches the wallet needs sorting out immediately.
After updating to versions 3.23-1 and 3.23-2 today, cache triggering did indeed break. Here's how I worked through it:
Before the fix:
(screenshot)
After the fix:
(screenshot)
Applies if:
- you are using an openai-responses-style endpoint
- your chain is OpenClaw -> CPA/new-api/other OpenAI-compatible proxy -> upstream OpenAI/Codex
- after upgrading to 2026.3.22+ / 2026.3.23+, your call logs show the cache no longer triggering
1. Root cause
OpenClaw merged this logic on 2026-03-19:
- PR #49877
- Title:
fix: strip prompt_cache_key for non-OpenAI openai-responses endpoints
fix: strip prompt_cache_key for non-OpenAI openai-responses endpoints (#49877)
main ← ShaunTsai:fix/strip-prompt-cache-non-openai
Opened 03:23 PM, 18 Mar 26 UTC by ShaunTsai (+100 / -1)

## Summary
- Problem: Volcano Engine DeepSeek (and other non-OpenAI providers using the `openai-responses` API) returns HTTP 400 `unknown field "prompt_cache_key"` because pi-ai unconditionally injects `prompt_cache_key` and `prompt_cache_retention` into OpenAI Responses request bodies.
- Why it matters: users configuring Volcano Engine models cannot use them at all — every request fails with a 400.
- What changed: added `prompt_cache_key`/`prompt_cache_retention` stripping to the existing `createOpenAIResponsesContextManagementWrapper` in `openai-stream-wrappers.ts`, using the existing `isDirectOpenAIBaseUrl()` check to determine whether the endpoint actually supports these fields.
- What did NOT change (scope boundary): prompt caching behavior for direct OpenAI, Azure OpenAI, and GitHub Copilot endpoints (they pass the `isDirectOpenAIBaseUrl` check). Anthropic caching uses a different mechanism (`cacheRetention` option) and is unaffected. The existing `createBedrockNoCacheWrapper` for non-Anthropic Bedrock models is also unchanged. No changes to `extra-params.ts`.

## Change Type (select all)
- [x] Bug fix

## Scope (select all touched areas)
- [x] Gateway / orchestration

## Linked Issue/PR
- Fixes #48155
- Supersedes #49727 (closed — that approach used a separate provider allowlist that duplicated knowledge already in `openai-stream-wrappers.ts` and was missing `azure-openai`)

## User-visible / Behavior Changes
Volcano Engine DeepSeek (and other non-OpenAI providers using the `openai-responses` API) will no longer fail with HTTP 400 on the unknown `prompt_cache_key` field.

## Security Impact (required)
- New permissions/capabilities? No
- Secrets/tokens handling changed? No
- New/changed network calls? No
- Command/tool execution surface changed? No
- Data access scope changed? No

## Repro + Verification
### Environment
- OS: macOS
- Runtime/container: Node 22+
- Model/provider: volces/deepseek-v3-2-251201
### Steps
1. Configure a `volces` provider with the `deepseek-v3-2-251201` model
2. Send a message to the agent
### Expected
- Agent responds normally
### Actual
- HTTP 400: `unknown field "prompt_cache_key"`

## Evidence
- [x] Code inspection: traced `prompt_cache_key` injection from the pi-ai `openai-responses` stream through to the HTTP request body; confirmed the field is only meaningful for OpenAI's own API
- [x] Verified the `isDirectOpenAIBaseUrl()` check correctly identifies OpenAI (`api.openai.com`), ChatGPT (`chatgpt.com`), and Azure OpenAI (`*.openai.azure.com`) endpoints — all other baseUrls (including Volcano Engine) return false

## Human Verification (required)
- Verified scenarios: traced the full code path from `applyExtraParamsToAgent` → `createOpenAIResponsesContextManagementWrapper` → `applyOpenAIResponsesPayloadOverrides`; confirmed `stripPromptCache` is true for non-direct-OpenAI endpoints and false for direct OpenAI
- Edge cases checked: `azure-openai` provider with `*.openai.azure.com` baseUrl passes `isDirectOpenAIBaseUrl` and keeps prompt cache fields; providers with no baseUrl (empty string) get fields stripped (safe — no-op since pi-ai only injects for openai-responses)
- What I did **not** verify: live Volcano Engine API call (no credentials available); full `pnpm check` currently has pre-existing typing debt failures on `main` unrelated to this PR

## AI Disclosure
- [x] AI-assisted (Kiro CLI)
- [x] I understand what the code does

## Compatibility / Migration
- Backward compatible? Yes
- Config/env changes? No
- Migration needed? No

## Failure Recovery (if this breaks)
- How to disable/revert: revert this commit
- Files/config to restore: `src/agents/pi-embedded-runner/openai-stream-wrappers.ts`
- Known bad symptoms: if `isDirectOpenAIBaseUrl` fails to recognize a legitimate OpenAI endpoint, prompt caching would silently stop working for that endpoint

## Risks and Mitigations
- Risk: a new OpenAI-compatible endpoint hostname (not `api.openai.com`, `chatgpt.com`, or `*.openai.azure.com`) would need `isDirectOpenAIBaseUrl` updated.
- Mitigation: this is the same function already used for `store` field decisions — any such endpoint would already be broken for `store: true` forcing, so the fix would naturally cover both.
In other words:
- if the baseUrl is not official OpenAI / Azure OpenAI
- OpenClaw strips prompt_cache_key and prompt_cache_retention from the request
This avoids errors on some third-party compatible endpoints, but it also hits endpoints that DO support prompt cache.
For example:
- new-api
- CLIProxyAPI
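The effect of that check can be sketched in a few lines of plain JavaScript. This is an illustrative re-implementation, not OpenClaw's actual code: the hostname list follows the PR description, and `stripPromptCacheFields` is a hypothetical helper name:

```javascript
// Simplified sketch of the decision introduced by PR #49877.
// Hostnames per the PR description: only these count as "direct OpenAI".
function isDirectOpenAIBaseUrl(baseUrl) {
  const host = new URL(baseUrl).hostname;
  return (
    host === "api.openai.com" ||
    host === "chatgpt.com" ||
    host.endsWith(".openai.azure.com")
  );
}

// Hypothetical helper showing which fields get dropped for other endpoints.
function stripPromptCacheFields(body, baseUrl) {
  if (isDirectOpenAIBaseUrl(baseUrl)) return body; // official: keep as-is
  const { prompt_cache_key, prompt_cache_retention, ...rest } = body;
  return rest; // proxy: cache fields silently removed
}
```

Under this logic, a request routed through e.g. `http://127.0.0.1:13000/v1` loses both cache fields before it ever reaches the upstream, which is why the hit rate drops to zero.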
2. Which file to edit
1) First, find the actual install directory
For a global install (npm -g / pnpm -g), OpenClaw usually ends up somewhere like:
/usr/lib/node_modules/openclaw/dist/
or:
$(npm root -g)/openclaw/dist/
2) The target filename isn't fixed, but the target function name is
Find the dist bundle that contains this function:
grep -RIl 'function shouldStripResponsesPromptCache(model)' /usr/lib/node_modules/openclaw/dist
If it's a global npm install but not under /usr/lib/node_modules, use:
grep -RIl 'function shouldStripResponsesPromptCache(model)' "$(npm root -g)/openclaw/dist"
3) On my machine, the file turned out to be
/usr/lib/node_modules/openclaw/dist/pi-embedded-CbCYZxIb.js
Note: the bundled js filename can vary between builds; go with whatever file grep actually finds.
3. How to make the change
Original code
Find this function:
function shouldStripResponsesPromptCache(model) {
  if (typeof model.api !== "string" || !OPENAI_RESPONSES_APIS.has(model.api)) return false;
  if (typeof model.baseUrl !== "string" || !model.baseUrl.trim()) return false;
  return !isDirectOpenAIBaseUrl(model.baseUrl);
}
If you want to whitelist your own proxies, for example:
127.0.0.1:13000, localhost:13000, my-proxy.example.com
you can change it to:
function shouldStripResponsesPromptCache(model) {
  if (typeof model.api !== "string" || !OPENAI_RESPONSES_APIS.has(model.api)) return false;
  if (typeof model.baseUrl !== "string" || !model.baseUrl.trim()) return false;
  if (
    model.baseUrl.includes("127.0.0.1:13000") ||
    model.baseUrl.includes("localhost:13000") ||
    model.baseUrl.includes("my-proxy.example.com")
  ) return false;
  return !isDirectOpenAIBaseUrl(model.baseUrl);
}
Here's what I ended up with on my machine:
function shouldStripResponsesPromptCache(model) {
  if (typeof model.api !== "string" || !OPENAI_RESPONSES_APIS.has(model.api)) return false;
  if (typeof model.baseUrl !== "string" || !model.baseUrl.trim()) return false;
  if (model.baseUrl.includes("127.0.0.1:13000") || model.baseUrl.includes("localhost:13000")) return false;
  return !isDirectOpenAIBaseUrl(model.baseUrl);
}
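Before restarting the gateway, the patched logic can be sanity-checked in isolation: paste the edited function into a Node REPL together with stand-ins for the two identifiers it references. The stubs below are assumptions for local testing only; OpenClaw's real `isDirectOpenAIBaseUrl` may differ:

```javascript
// Stand-ins for the bundle's own definitions (testing only; the real
// implementations live inside the OpenClaw dist bundle).
const OPENAI_RESPONSES_APIS = new Set(["openai-responses"]);
const isDirectOpenAIBaseUrl = (u) => {
  const host = new URL(u).hostname;
  return host === "api.openai.com" || host === "chatgpt.com" || host.endsWith(".openai.azure.com");
};

// The patched function, copied from the edited bundle.
function shouldStripResponsesPromptCache(model) {
  if (typeof model.api !== "string" || !OPENAI_RESPONSES_APIS.has(model.api)) return false;
  if (typeof model.baseUrl !== "string" || !model.baseUrl.trim()) return false;
  if (model.baseUrl.includes("127.0.0.1:13000") || model.baseUrl.includes("localhost:13000")) return false;
  return !isDirectOpenAIBaseUrl(model.baseUrl);
}

// Whitelisted proxy is no longer stripped:
console.log(shouldStripResponsesPromptCache({ api: "openai-responses", baseUrl: "http://127.0.0.1:13000/v1" })); // false
// An unknown third-party endpoint is still stripped:
console.log(shouldStripResponsesPromptCache({ api: "openai-responses", baseUrl: "https://some-other-relay.example.com/v1" })); // true
```

If both lines print as expected, the edit did what you intended and you can restart the gateway.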
Alternatively, paste the following straight into your own OpenClaw and let it locate and patch the file itself:
Help me fix a local cache compatibility problem in OpenClaw.
Goal:
- After upgrading, my OpenClaw strips `prompt_cache_key` from requests sent to my OpenAI-compatible proxy, so the prompt cache almost never triggers.
- In the globally installed OpenClaw dist files on this machine, find the function `shouldStripResponsesPromptCache(model)`.
- Whitelist the proxy baseUrl / host values I specify so those proxies **no longer have `prompt_cache_key` stripped**.
Follow these steps:
1. Locate OpenClaw's actual dist directory:
- Check `/usr/lib/node_modules/openclaw/dist` first
- If it's not there, check `$(npm root -g)/openclaw/dist`
2. Search the dist directory for:
- `function shouldStripResponsesPromptCache(model)`
3. Find the real bundle file containing that function (do not edit `.bak` backup files).
4. Back the file up first.
5. Change this function:
function shouldStripResponsesPromptCache(model) {
  if (typeof model.api !== "string" || !OPENAI_RESPONSES_APIS.has(model.api)) return false;
  if (typeof model.baseUrl !== "string" || !model.baseUrl.trim()) return false;
  return !isDirectOpenAIBaseUrl(model.baseUrl);
}
into the version with whitelist pass-through.
I want these proxies whitelisted (generate the OR conditions from the actual values):
- 127.0.0.1:13000
- localhost:13000
- plus every provider baseUrl under `models.providers.*` in my OpenClaw config where `api = openai-responses` and the `baseUrl` is not official OpenAI/Azure OpenAI
In other words:
- read the provider config from `~/.openclaw/openclaw.json` first
- automatically add the qualifying baseUrls to the whitelist
6. After making the change:
- restart the gateway: `openclaw gateway restart`
- run `openclaw health`
7. Finally, tell me:
- which file you changed
- which baseUrls/hosts you whitelisted
- whether the restart succeeded
- whether health is OK
Notes:
- Make the minimal change only; don't touch any other logic
- The goal is "keep `prompt_cache_key` for whitelisted proxies", not to disable the strip logic globally
- If the bundle filename differs from this version's, go by the actual grep results
- If you can't find the function, show me the search results instead of guessing
A better upstream fix
Someone has already opened a PR on the OpenClaw GitHub project:
- #52017
fix(agents): add supportsPromptCache compat opt-in for third-party OpenAI proxies
fix(agents): add supportsPromptCache compat opt-in for third-party OpenAI proxies (#52017)
main ← lanyasheng:fix/support-prompt-cache-opt-in-for-proxies
Opened 02:28 AM, 22 Mar 26 UTC by lanyasheng (+52 / -0)

## Summary
PR #49877 introduced `shouldStripResponsesPromptCache()` to strip `prompt_cache_key` and `prompt_cache_retention` from Responses API payloads when the `baseUrl` is not a direct OpenAI or Azure endpoint. This correctly prevents 400 errors from endpoints that do not support these fields.

However, many users run OpenAI-compatible proxies (rate-limit aggregators, cost-tracking proxies, regional relay endpoints) that forward requests to real OpenAI APIs and fully support prompt caching. For these setups, the cache parameters are silently stripped, resulting in 0% cache hit rates and significantly higher costs.

### Changes
- Add `supportsPromptCache` to `ModelCompatConfig` type and Zod schema
- Check `compat.supportsPromptCache === true` in `shouldStripResponsesPromptCache()` before stripping
- Add 2 test cases verifying the opt-in behavior (keeps fields when true, strips when false)

### Configuration Example
```json
{
  "models": {
    "providers": {
      "my-proxy": {
        "baseUrl": "https://my-proxy.example.com/v1",
        "api": "openai-responses",
        "models": [{ "id": "gpt-5.4", "compat": { "supportsPromptCache": true } }]
      }
    }
  }
}
```
Refs #48155
If that PR gets merged, these hand-applied dist patches won't be needed anymore.
Replies:
--[1]--:
Front-row support!
Thanks for sharing
--[2]--:
Support, and thanks for sharing
--[3]--:
No wonder my token usage shot up so much
--[4]--:
Thanks for sharing
--[5]--:
Prompt cache and kv cache: didn't know OpenClaw had this too. Learned something