Why does gpt 5.5 xhigh seem to burn through context so fast?

2026-04-29 10:41
Problem description:

I only had gpt 5.5 xhigh do a single round of code fixes, and within a little while it had used up the 1M context and auto-compacted. I do have superpowers installed, but even so it shouldn't be this fast. I ran gpt 5.4 in the same environment before and never saw context get consumed this quickly.

Replies:
--[1]--:

```toml
model_context_window = 1000000
model_auto_compact_token_limit = 900000
```

Add these two lines to your .codex config file and you're set. That said, 5.5 doesn't support this at the moment; I set it up back when I was on 5.4. Also note that with the 1M context, token consumption doubles.
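
For reference, here's a minimal sketch of what that config file might look like as a whole. The two context keys are taken from this reply; the model and reasoning-effort lines are assumptions borrowed from the GitHub issue quoted in the next reply, and per this thread 5.5 currently ignores these overrides anyway:

```toml
# ~/.codex/config.toml -- minimal sketch, not an official reference.
# Key names are the ones this thread uses; gpt-5.5 reportedly ignores the overrides.

model = "gpt-5.5"                        # assumed; mirrors the issue quoted below
model_reasoning_effort = "high"          # assumed, as in the issue below
model_context_window = 1000000           # request a 1M-token window (worked on 5.4)
model_auto_compact_token_limit = 900000  # auto-compact once ~900k tokens are in use
```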


--[2]--:
github.com/openai/codex

config.toml context window settings are not respected

Opened 07:11 PM, 23 Apr 26 UTC by kkellyoffical · labels: bug, context, config

### What happened?

The context-related settings in `config.toml` do not appear to take effect. I tried configuring Codex from:

```text
C:\Users\12768\.codex\config.toml
```

Relevant settings:

```toml
model = "gpt-5.5"
model_reasoning_effort = "high"
model_context_window = 960000
model_auto_compact_token_limit = 800000
```

After setting a large context window, for example around `1M`, Codex still automatically reports/uses about `258k` instead. This makes it impossible to control the effective context window from the config file.

### Expected behavior

`model_context_window` and `model_auto_compact_token_limit` should be honored, or Codex should report a clear validation/error message if the configured value exceeds the supported model/client limit.

### Actual behavior

The configured value is silently reduced/ignored. For example, setting approximately `1M` results in Codex automatically changing/using about `258k`.

### Environment

- OS: Windows
- Config path: `C:\Users\12768\.codex\config.toml`
- Model configured: `gpt-5.5`

### Why this matters

Users cannot reliably control context behavior through `config.toml`, and the silent conversion makes it difficult to understand whether the setting is unsupported, capped by the model, capped by the client, or being parsed incorrectly.

I looked into this and what this person says seems pretty reasonable.


--[3]--:

It's probably because 5.5 in Codex doesn't get the 1M context; it only has 400k.


--[4]--:

They'll probably release the 1M context later on; otherwise the context gets compacted far too quickly.


--[5]--:

So the 1M context setting just doesn't take effect for this model?


--[6]--:

It seems only the API gets the 1M context. In Codex you're given 400k, and after subtracting the 128k reserved for output, it apparently starts compacting at around 272k.
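
If that's how it works, one hedged workaround (untested, and assuming Codex honors values at or below its own cap, which this thread does not confirm) would be to stop asking for 1M and instead pin the limits inside the window described here, so compaction at least triggers where you expect it:

```toml
# Untested sketch based on the numbers in this reply:
# 400k total window - 128k reserved for output ≈ 272k usable for input.

model_context_window = 400000            # match the window Codex reportedly grants 5.5
model_auto_compact_token_limit = 250000  # compact just before the ~272k input budget
```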


--[7]--:


I set this up in the past and tried restarting Codex, rebooting the machine, all of that; both the CLI and the app stayed stubbornly at 258k, which was maddening. I'm on xhigh, though I don't think that's related.


--[8]--:

It's probably just too expensive to offer: with gpt-5.4, going past 272K already doubled the price, and gpt-5.5 has now doubled in price again.


--[9]--:

That shouldn't be the case; on 5.4 it always took effect for me.


--[10]--:

How did you configure the 1M context? I set the context options in the toml for both the Codex app and the CLI, but it just doesn't work; /status still shows 256k. Sad.