If it's just a distillation of GPT-4, wouldn't we expect it to have worse quality than o1? But I've seen countless examples of DeepSeek-R1 solving math problems that o1 cannot.
>Very often, DeepSeek tells you it's ChatGPT or OpenAI; it's actually quite easy to get it to do that. Some say that's related to "the background radiation on the post-AI internet". I'm not a fentanyl consumer so, unfortunately, I think that argument is trash.
The exact same thing happened with Llama. Sometimes it also claimed to be Google Assistant or Amazon Alexa.
Are you sure you checked R1 and not V3? By default, R1 is disabled in their UI.
Prompt: Find an English word that contains 4 'S' letters and 3 'T' letters.
DeepSeek-R1: stethoscopists (correct, thought for 207 seconds)
ChatGPT-o1: substantialists (correct, thought for 188 seconds)
ChatGPT-4o: statistics (wrong) (even with "let's think step by step")
In almost every example I provide, it's on par with o1 and better than 4o.
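The letter-count claims above are easy to check mechanically rather than by eyeballing the words; a quick sketch (the expected counts of 4 S's and 3 T's come from the prompt):

```python
# Count 's' and 't' occurrences in each model's answer, case-insensitively.
def st_counts(word: str) -> tuple[int, int]:
    """Return (number of 's', number of 't') in the word."""
    w = word.lower()
    return w.count("s"), w.count("t")

for word in ["stethoscopists", "substantialists", "statistics"]:
    s, t = st_counts(word)
    verdict = "correct" if (s, t) == (4, 3) else "wrong"
    print(f"{word}: {s} S, {t} T -> {verdict}")
```

Running it confirms the thread: "stethoscopists" and "substantialists" each have 4 S's and 3 T's, while "statistics" has only 3 S's.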
>substantially wrong on benchmarks like ARC which is designed with this in mind.
Wasn't it revealed that OpenAI trained their model on that benchmark specifically? And that they had access to the entire dataset?