
I tried deepseek v4 through open code at the weekend. I'm a daily Claude/Claude code user.

I tried to build something simple, and while it got the job done, the thinking it displayed did not fill me with confidence. It was pages and pages of "actually no", "hang on", "wait that makes no sense". It was like the model was having a breakdown.

Bear in mind open code was also new to me, so I could just be seeing thinking where I usually don't



> "actually no", "hang on", "wait that makes no sense"

Claude does the same thing; Claude Code just hides the thinking now


And before that they summarized it. But yeah, thinking was always like that (when it first started, it almost seemed like a scheme to massively increase token use...)

I usually like the answers generated by those flows.

You can just use it through Claude Code, so you get to keep the system prompt and tooling you are used to.

3rd party models are a drop-in replacement via `ANTHROPIC_BASE_URL` in Claude Code, something people seem to miss right now. And contrary to what Anthropic might like you to think, you don't need Opus 4.7 running the harness to get similar performance.

https://api-docs.deepseek.com/quick_start/agent_integrations...
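A minimal sketch of the env-var swap described above, assuming DeepSeek's Anthropic-compatible endpoint as documented at the link (the endpoint URL, variable names, and model id here should be checked against that guide before relying on them; the key is a placeholder):

```shell
# Point Claude Code at DeepSeek's Anthropic-compatible API instead of
# Anthropic's own endpoint. Claude Code reads these variables on startup.
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"  # assumed endpoint
export ANTHROPIC_AUTH_TOKEN="sk-your-deepseek-key"              # placeholder key
export ANTHROPIC_MODEL="deepseek-chat"                          # assumed model id

echo "Claude Code will talk to: $ANTHROPIC_BASE_URL"
```

After this, launching `claude` in the same shell should route requests through the third-party backend while keeping the familiar system prompt and tooling.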


Is there an easier way to manage multiple models?

I just made a simple script that makes it easy to switch between models.
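For reference, a script like that can be a small shell function that swaps the relevant env vars per provider. Everything here is illustrative: the provider names, URLs, and the `DEEPSEEK_API_KEY`/`ANTHROPIC_API_KEY` variables are assumptions, not part of any official tooling.

```shell
# Hypothetical helper: switch Claude Code between backends by swapping
# environment variables. Source this file, then call `use_model <name>`.
use_model() {
  case "$1" in
    deepseek)
      export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"  # assumed endpoint
      export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"
      ;;
    anthropic)
      unset ANTHROPIC_BASE_URL       # fall back to the default endpoint
      export ANTHROPIC_AUTH_TOKEN="$ANTHROPIC_API_KEY"
      ;;
    *)
      echo "unknown provider: $1" >&2
      return 1
      ;;
  esac
  echo "now using: $1"
}

use_model deepseek
```

A new shell (or a `use_model anthropic` call) switches back, so nothing permanent changes.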

Both CC and Codex do that too; they just removed the thinking/verbose display and now hide most of it.

Yeah people aren’t aware that we don’t see the actual traces anymore lol

Opus 4.6 and GPT 5.4 do the same thing through GH Copilot and Bedrock. I get plenty of "Actually the simplest solution is ..., wait no, actually I should do ..., the best fix is ..."

I feel the reasoning might be tuned for hard questions and not agentic work. It overthinks, which is good for a very hard question, not for small incremental agentic steps. In theory, disabling thinking and using really well-formed instructions, forcing it to still emit a bunch of tokens each step prior to taking action, could help. Only one way to find out though.
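One way to try that experiment: call the non-reasoning model (`deepseek-chat` rather than `deepseek-reasoner`) and push the "think briefly, then act" behavior into the system prompt. The prompt wording and task below are illustrative; the endpoint and model ids are from DeepSeek's public docs, but verify them before use.

```shell
# Build a request that disables long reasoning (by choosing the chat model)
# while still forcing a short explicit plan before each action.
PAYLOAD=$(cat <<'EOF'
{
  "model": "deepseek-chat",
  "messages": [
    {"role": "system",
     "content": "Before every tool call, write a plan of at most 3 bullet points, then act. Do not second-guess a plan once written."},
    {"role": "user",
     "content": "Rename util.py to helpers.py and fix the imports."}
  ]
}
EOF
)

# Actually sending it requires a real key and network access:
# curl -s https://api.deepseek.com/chat/completions \
#   -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

echo "$PAYLOAD"
```

Comparing the output of this against the reasoning model on the same small task would show whether the "actually no, wait" churn is the reasoning tuning or the model itself.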


Using a bunch of CLIs to work with DeepSeek V4, I've found that Langcli is the best fit for DeepSeek V4. For programming tasks, the cache hit rate is above 95%. Not only can it seamlessly and dynamically switch between DeepSeek V4 Flash, V4 Pro, and other mainstream models within the same context, but it is also 100% compatible with Claude Code.

I previously encountered the "reasoning content missing" issue when using opencode + deepseek v4. I don't know if it has been fixed now.


> I tried to build something simple and while it got the job done the thinking displayed did not fill me with confidence. It was pages and pages of "actually no", "hang on", "wait that makes no sense". It was like the model was having a breakdown.

It has probably been trained to regularly assess its own "thoughts" and to output the results of that assessment. I wouldn't worry much about the content of the reasoning text, and it's nice to have it, in contrast to the closed models' "summaries": it's easier to see what's going on.


> Bear in mind open code was also new to me so I could be just seeing thinking where I usually don't

Well there's your problem.

Edit: I remember seeing similar things with ChatGPT or Codex, although I can't remember in which context.


use hide_thinking in opencode to get the claude experience :p

I see similar things using GLM 5.1 in pi.

I had to turn off thinking traces because it was just giving me anxiety looking at it.


Eh, you're seeing raw thinking tokens. With Claude <x> 4, and I think the GPT-5 series, you are no longer seeing real thinking tokens, but "summarized" tokens that are probably quite different from the raw thinking.



