Qwen has previously engaged in deceptive benchmark hacking: they claimed SOTA coding performance back in January, and there's a good reason no software engineer you know was writing code with Qwen 2.5.
Maybe not the big general-purpose models, but Qwen 2.5 Coder was quite popular. Aside from people using it directly, I believe Zed's Zeta was a fine-tune of the base model.
There is also paranoia that the Chinese government may compel its tech companies to play dirty tricks on their users. Yet, without a trace of irony, the same critics have nothing to say about this not-so-secret practice among US-based technology companies.
Clearly the thing we should want is a healthy, international AI ecosystem characterized by both cooperation and competition, so that we are free to choose between models developed under different conditions: in compliance with different laws, shaped by different cultures and biases, pressured by different interests, and so on.
To the extent that there's a solution, the solution is choice!
Those hurdles exist because the alternatives are worse for most people. You think Cursor wouldn't spin up its own Qwen inference cluster, or contract with someone who can, if doing so gave them SOTA code-editing performance over Claude?
https://winbuzzer.com/2025/01/29/alibabas-new-qwen-2-5-max-m...
Alibaba is not a company whose culture is conducive to earnestly acknowledging that it is behind SOTA.