
I don’t completely disagree, but “assertion one” [1]

[1] ~ you can obviously verify this yourself by running it and seeing how expensive it is.

…is an enormously weak argument.

You suppose. You guess. We guess.

Let’s be honest, you can just stop at:

> I don’t think so.

Fair. I don’t either, but that’s about all we can really get at the moment, afaik.



No, the point of [1] is that this is not some "secret knowledge". My response is based on running models in production and comparing my costs with the costs I would pay to API providers running the same models.


He's not wrong: if you can run an open-weights model in any cloud, you can very straightforwardly estimate the cost of running it. Considering that these providers either use long-term contracts or maybe even buy their own hardware, this theoretical cloud deployment is itself an overestimate of their costs.
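A back-of-envelope version of that estimate looks something like the sketch below. Every number in it (GPU hourly rate, replica size, throughput) is an illustrative assumption, not a measured figure; the point is only that the arithmetic is trivial to redo with your own numbers.

```python
# Hypothetical back-of-envelope estimate: USD cost per million generated
# tokens when self-hosting an open-weights model on rented cloud GPUs.
# All inputs below are assumptions for illustration, not real quotes.

def cost_per_million_tokens(gpu_hourly_rate, gpus_per_replica, tokens_per_second):
    """USD to generate 1M tokens at the given aggregate throughput."""
    tokens_per_hour = tokens_per_second * 3600
    hourly_cost = gpu_hourly_rate * gpus_per_replica
    return hourly_cost / tokens_per_hour * 1_000_000

# Assumed: $2.50/GPU-hour on-demand, 1 GPU per replica,
# 1500 tokens/s aggregate throughput under batching.
print(round(cost_per_million_tokens(2.50, 1, 1500), 2))
```

Plug in a published on-demand GPU price and a measured throughput for the model you care about, and you get a ceiling on per-token cost that you can compare directly against API pricing.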


…and it's perfectly legit to run that, write the numbers down, and link to it.

But:

A) It makes absolutely no difference to the fact that you have no idea what the big LLM providers are actually doing.

B) Just asserting some random thing and saying “anyone competent can verify this themselves” is a weak argument. You're saying you've done the research, but failing to provide any evidence that you actually have.

If you've crunched the numbers, then man up and post them.

If not, then stop at “I think…”

“This is based on my experience running production workloads…” is a nice way of saying “I don't have any data to back up what I'm saying.”

If you did, you could just link to it.

…by not posting data you make your argument non-falsifiable.

It is just an opinion.



