The current version of sglang allows inference with the R1 model at a cost very close to the rate DeepSeek claimed (on H100s, rather than DeepSeek's exact hardware). At this point the claim has largely been validated by replication, so there is little left to take with a grain of salt. If anything, the remaining open question is whether the true margin is even higher than claimed, since one could optimize further for modern NVIDIA hardware.
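
For reference, a minimal sketch of what such a setup looks like. The launch flags follow sglang's standard `launch_server` interface, and the server speaks the OpenAI-compatible API; the port, `--tp` value, and prompt are assumptions for illustration (R1's full weights are large enough that, in practice, a multi-node H100 or single-node H200 deployment is needed; see sglang's docs for exact hardware requirements):

```python
# Launch the server first (shell, on a node with enough GPU memory for R1):
#   python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1 \
#       --tp 8 --trust-remote-code --port 30000
#
# sglang exposes an OpenAI-compatible endpoint, so any OpenAI client works:
from openai import OpenAI

# api_key is unused by a local sglang server but required by the client.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Explain MoE routing in one paragraph."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```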