>if coding agents worked nearly as well as the hype people are selling it
I don't feel like their capabilities are substantially oversold. I think we are shown what they can do, what they can't do, and what they can't do reliably.
I only really encounter the idea that they are expected to be nigh on infallible when people highlight a flaw as if it were proof that the whole thing is a house of cards held up by the feature they have revealed to be flawed.
The problems in LLMs are myriad. Finding problems and weaknesses is how they get addressed. They will never be perfect. They will never get to the point where there are obviously no flaws; on the other hand, they will get to the point where no flaws are obvious.
Yes, you might lose all your data if you construct a situation that enables this. Imagine not having backups of your hard drive. Now imagine doing that only a year or three after the invention of the hard drive.
Mistakes like this can hurt; sometimes they are avoidable through common sense. Sometimes the only way to realise the risk is to be burnt by it.
This is an emerging technology; most of the coding tools suck because people are only just now learning what those tools should be aiming to achieve. The tools that suck are the data points guiding us to better tools.
Many people expect great things from AI in the future. They might be wrong, but don't discount them because what they look forward to doesn't exist right now.
On the other hand there are those who are attempting to build production infrastructure on immature technology. I'm ok with that if their eyes are wide open to the risk they face. Less so if they conceal that risk from their customers.
>I don't feel like their capabilities are substantially oversold. I think we are shown what they can do, what they can't do, and what they can't do reliably.
> Mark Zuckerberg wants AI to do half of Meta's coding by 2026
> Nvidia CEO Jensen Huang would not have studied computer science if he were a student today. He urges mastering the real world for the next AI wave.
> Salesforce CEO Marc Benioff just announced that due to a 30% productivity boost brought by AI tools, the company will stop hiring software engineers in 2025.
I don't know what narratives you have been following, but these are the people who decide where money goes in our industry.
Even people inside Salesforce don't know where this number is coming from. I asked some of my blog readers to give me insider intel on this, and all I heard back is that there's no evidence to be seen, despite multiple staff asking for clarification internally.
Most of this stuff is very, very transparently a lie.
So it's the usual culling, just disguised as a different theme, with AI as a convenient scapegoat while at the same time gloating about how far ahead one's company is.
There are real products and good use cases, and then there is this massive hype that can be seen also here on HN: carefully crafted PR campaigns focusing exactly on sites like this one. It also doesn't seem sustainable cost-wise long term; most companies apart from startups will have a hard time accepting paying even 10% of a junior salary for such a service. Maybe this will change, but I doubt it.