Hacker News

I don't think it's true, but am I alone in wishing it was? My world is disrupted somewhat but so far I don't think we have a thing that upends our way of life completely yet. If it stayed exactly this good I'd be pretty content.


I agree with your sentiment, but I think we've yet to see the full application of the current technology. (Even if LLMs themselves don't improve, there's significant opportunity for people to use it in ways not currently being done)


The issue with LLMs is trust.

I don’t see that ever going away. Humans have learned to trust other humans over a large time scale with rules in place to control behaviour.


That's a big problem with very specific manifestations. My startup helps customers handle regulatory compliance, partly by forwarding complex questions to a pool of consultants.

We've now compared more than a hundred of their replies with those of GPT Pro, and the quality is roughly the same: sometimes a little worse, sometimes a little better, always more detailed, never unacceptable.

But how to convince our customers that we have the right technology and know how to use it appropriately? We're trying, but it's not easy.

Part of that's accountability. When the LLM produces rubbish, as rare as that may be, who is accountable? There is no person, with a reputation at stake, behind it.


Yup exactly.

Being able to hold someone liable for a screw-up is how we've been able to function as a society and get to where we are today.


LLM plus human should be better than either standalone. You won’t be able to make as much money scaling out, though.

But surely you're not only concerned about profit, right? What's the point of life if you're just trying to get rich?


When the dust settles, say if LLMs were to stop improving today, we would come to learn their exact capabilities: what they can do reliably and what they can't.

Once we know what they can do well, how to get them to do it, and what they can't, you could say we "trust" them with the first category and simply stop asking them to do the second.


This feeds the adoption problem, though: a lot of companies are thinking "why settle for the current models when even the vendors are saying the models in six months will be exponentially better? Let's let the early adopters work out the bugs and move when these things are more stable."


LLMs are random by nature; they might get something done one time but miserably fail the next.


I think we're getting to a point where LLM randomness is relevant to someone writing a white paper on LLMs, but not as relevant to consumers of them. Yes the technology uses randomness, but the quality of response somehow still seems very consistent and predictable in 2026.


Yeah, and we will continue to learn to use them where the amount of random failure is acceptable or can be mitigated or reduced with additional tools.
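One common mitigation along those lines is wrapping a nondeterministic call in a validate-and-retry loop. Here's a minimal sketch: `flaky_extract` is a hypothetical stand-in for an LLM call (simulated with seeded randomness), and `validate` is an assumed machine-checkable acceptance test; the names and failure rate are illustrative, not from any real API.

```python
import random

def flaky_extract(text: str, rng: random.Random) -> str:
    """Simulated LLM call: returns the year mentioned in `text`,
    but garbles the format about half the time."""
    return "2026" if rng.random() > 0.5 else "twenty twenty six"

def validate(answer: str) -> bool:
    """Accept only a four-digit year."""
    return answer.isdigit() and len(answer) == 4

def extract_with_retries(text: str, rng: random.Random, max_attempts: int = 5) -> str:
    """Retry the flaky call until the output passes validation."""
    for _ in range(max_attempts):
        answer = flaky_extract(text, rng)
        if validate(answer):
            return answer
    raise RuntimeError("no valid answer after retries")

print(extract_with_retries("The report covers 2026.", random.Random(0)))
```

The point is that the residual failure rate drops roughly geometrically with each retry, so even a fairly unreliable generator can be made dependable wherever a cheap validator exists.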



