Piece of free advice towards a better civilisation: people who didn't even read the comment they're replying to shouldn't be rewarded for their laziness.
I read his comment and still replied. His claim that nobody reads thinking blocks and that thinking blocks increase latency is nonsense. I am not going to figure out which settings I need to enable, because after reading this thread I cancelled my subscription and switched over to Codex: I had the exact same experience as many in this thread.
Also what is that "PR advice"—he might as well wear a suit. This is absolutely a nerd fight.
I tested because I was porting memories from Claude Code to Codex anyway, so I figured I might as well. I obviously still have subscription days remaining.
There is another comment in this thread linking a GitHub issue that discusses this. The GitHub issue this whole HN submission is about even says that Anthropic hides thinking blocks.
I didn't use commands, only rules, memories, and skills. I asked Codex to read rules and memories from where Claude Code stores them on the filesystem and merge them into `AGENTS.md`. This actually works better, because Anthropic prompts Claude Code to write each memory to a separate file: you end up with a main MEMORY.md that acts as a kind of directory, listing each individual memory with its file name and a brief description, in the hope that Claude Code will read them. The problem is that Claude Code never does. This is the same problem[0] that Vercel had with skills, I believe. Skills are easy to port because they appear to use the same format, so you can just do `mv ~/.claude/skills ~/.codex/skills` (or `.agents/skills`).
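For what it's worth, the merge itself is simple enough that you don't even need to ask the model to do it. A rough Python sketch of the idea (the one-file-per-memory layout and the `*.md` naming here are assumptions; adjust the paths to wherever your setup actually writes things):

```python
from pathlib import Path

def merge_memories(memory_dir: Path, agents_md: Path) -> int:
    """Inline every memory file into a single AGENTS.md, so the agent sees
    the content directly instead of a MEMORY.md directory listing it may
    never follow up on. Returns the number of memories merged."""
    sections = []
    for memo in sorted(memory_dir.glob("*.md")):
        # One heading per memory so the merged file stays navigable.
        sections.append(f"## Memory: {memo.stem}\n\n{memo.read_text().strip()}\n")
    agents_md.write_text("\n".join(sections))
    return len(sections)
```

The point is just that the content ends up inline, where it's guaranteed to be in context, rather than behind an extra read the model has to decide to make.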
What I was pointing out in my comment about the PR advice is that someone responding from a corporation to customers should be providing information to help the customer, nothing more.
Customers may want to fight (you seem to be providing an example), but representatives shouldn't take the bait.
I also have a similar experience with their API: some requests stall for minutes with zero events coming in from Anthropic. Presumably the model is doing this "extended thinking", but there is no way to see it. I treat these requests as stuck and retry. Same experience in Claude Code with Opus 4.6 when effort is set to "high": the model gets stuck for ten minutes (at which point I cancel) and the token count indicator doesn't increase.
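The treat-as-stuck-and-retry handling is easy to automate if you're driving the API yourself. A generic sketch (not Anthropic's SDK, just a watchdog over any iterator of stream events; the timeout and retry counts are made-up values):

```python
import queue
import threading

class StallError(Exception):
    """Raised when the event stream goes silent for too long."""

_SENTINEL = object()

def stream_with_watchdog(events, stall_timeout):
    """Yield events from `events`, raising StallError if the gap between
    consecutive events exceeds `stall_timeout` seconds."""
    q = queue.Queue()

    def pump():
        for ev in events:
            q.put(ev)
        q.put(_SENTINEL)  # mark normal end of stream

    threading.Thread(target=pump, daemon=True).start()
    while True:
        try:
            ev = q.get(timeout=stall_timeout)
        except queue.Empty:
            raise StallError("no events within timeout; treating request as stuck")
        if ev is _SENTINEL:
            return
        yield ev

def retry_stream(make_stream, stall_timeout, attempts=3):
    """Call `make_stream()` up to `attempts` times, abandoning any attempt
    whose event stream stalls, and return the events of the first attempt
    that completes."""
    for _ in range(attempts):
        try:
            return list(stream_with_watchdog(make_stream(), stall_timeout))
        except StallError:
            pass
    raise StallError("gave up after repeated stalls")
```

In practice `make_stream` would open a fresh streaming request; the watchdog just makes "stuck for minutes with zero events" a recoverable condition instead of something you babysit.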
I am not buying what this guy says. He is either lying or not telling us everything.
I wrote my own harness with introspection/long-form thinking as a tool the model can use to plan. It works really well with Opus. Sadly I can't use Claude Code: it sits there ticking for minutes, seemingly doing absolutely nothing, even though I know it's working. I hate that as an experience, so I built my harness with the philosophy of always having something streaming on the UI.
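To illustrate the idea: "thinking as a tool" just means registering a tool whose only job is to let the model write its plan somewhere visible. A minimal sketch in the Anthropic tool-use schema shape (the tool name, description, and handler here are mine, not anything official):

```python
# Hypothetical "think" tool: the handler is a no-op that records the plan,
# so the model's reasoning arrives as ordinary, visible tool calls the UI
# can stream instead of sitting silent for minutes.
THINK_TOOL = {
    "name": "think",
    "description": (
        "Write out long-form reasoning or a plan before acting. "
        "The content is shown to the user and has no side effects."
    ),
    "input_schema": {
        "type": "object",
        "properties": {"thought": {"type": "string"}},
        "required": ["thought"],
    },
}

def handle_think(tool_input: dict, log: list) -> str:
    """Record the thought and hand control straight back to the model."""
    log.append(tool_input["thought"])
    return "noted"
```

The harness renders each `thought` as it arrives, so there is always something on screen even while the model is only planning.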
Btw the system prompt length in CC is getting to be insane.
See the docs: https://code.claude.com/docs/en/settings#available-settings