LLMs can't be strategic because they do not understand the big picture -- that the real work of good software is balancing a hundred different constraints in a way that produces the optimal result for the humans who use it.
It's not all that different from the state of big corp software today! Large organizations with layers of management tend to lose all ability to keep a consistent strategy. They tend to go all in on a single dimension, such as ROI for the next quarter, and miss the bigger picture. Good software is about creating longer term value and takes consistent skill & vision to execute.
Those software engineers who focus on this big picture thinking are going to be more valuable than ever.
>Good software is about creating longer term value and takes consistent skill & vision to execute.
>Those software engineers who focus on this big picture thinking are going to be more valuable than ever.
Not to rain on our hopes, but AI can give us some options and we can pick the best. I think this eliminates all mid-level positions. Newbies are low cost and make decisions that are low stakes. The most senior of seniors can make 30 major decisions per day when AI lays them out.
I own a software shop and my hires have been: interns and people with the specific skills of my industry (mechanical engineers).
2 years ago, I hired experienced programmers. Now I turn my mechanical engineers into programmers.
So what you are saying is that you removed the people who can make the decisions that keep your software maintainable, and kept the people who will slowly, over time, cause your software to become less maintainable? I'm not sure that tradeoff is a good one.
Not to rain on our hopes, but AI can give us some options and we can pick the best.
a.k.a. greedy algorithms, a subject those of us on HN should be well-acquainted with. You can watch the horizon effect play out frequently in corporate decision-making.
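To make the horizon effect concrete, here's a toy sketch (Python, with made-up payoff numbers; nothing here is from the thread): a greedy evaluator that only scores the next couple of quarters picks the front-loaded branch, while a longer horizon flips the choice.

```python
# Toy horizon effect: an agent that only scores `horizon` steps ahead
# picks whatever looks best inside that window and misses the larger
# payoff sitting just beyond it. Payoff numbers are invented.

branch_a = [10, 10, 1, 1, 1, 1]   # front-loaded value: great next quarter
branch_b = [2, 2, 2, 30, 30, 30]  # strategic bet that pays off later

def value_within_horizon(payoffs, horizon):
    """Total value visible to an agent that looks `horizon` steps ahead."""
    return sum(payoffs[:horizon])

for horizon in (2, 6):
    a = value_within_horizon(branch_a, horizon)
    b = value_within_horizon(branch_b, horizon)
    print(f"horizon={horizon}: A={a}, B={b} -> picks {'A' if a >= b else 'B'}")

# horizon=2: A=20, B=4  -> picks A   (the next-quarter-ROI trap)
# horizon=6: A=24, B=96 -> picks B
```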
> Not to rain on our hopes, but AI can give us some options and we can pick the best.
But that's kind of my point. A bunch of decisions like that tends to end up with a "random walk" effect: lots of tactical choices that don't add up to anything strategic. They could, but it takes a human in the loop to hold onto that overall strategy.
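A hedged toy model of that random-walk effect (my own illustration, not anything the commenters wrote): if each review picks among AI-proposed moves using only local judgment, the position barely drifts; picks only compound when the reviewer holds a consistent direction.

```python
import random

random.seed(0)

def simulate(steps, has_strategy):
    """Each step the 'AI' proposes three moves of -1 or +1. A reviewer
    with a strategy always picks the move toward the goal (+1 whenever
    one is offered); a purely tactical reviewer has no fixed direction,
    so the picks are effectively random and don't compound."""
    pos = 0
    for _ in range(steps):
        options = [random.choice((-1, 1)) for _ in range(3)]
        pos += max(options) if has_strategy else random.choice(options)
    return pos

print("tactical: ", simulate(10_000, has_strategy=False))  # drifts near 0
print("strategic:", simulate(10_000, has_strategy=True))   # ~0.75 * steps
```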
Why can’t LLMs understand the big picture? I mean, a lot of companies have most of their information available in a digital form at this point, so it could be consumed by the LLM.
I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.
Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.
"Most senior of seniors" could make sense (although I'd like to see a collection of independent guilds coordinated by an LLM "CEO" just to see how it could work; it might not be good enough yet, but it'd be an interesting experiment).
Ultimately, I suspect "AI" (though maybe much more advanced than current LLMs) will be able to do just about any information-based task. But in the end only humans can actually be responsible/accountable.
> Because LLMs don't understand things to begin with.
Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLMs.
The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy and it is pretty much why I can say LLMs can't be strategic.
To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical or not. And it includes absolutely everything -- human psychology to understand how people might feel about certain features or usage models, the future outlook for which popular framework to choose and whether it will be as viable next year as it is today. The geography and geopolitics of which cloud provider to use. The knowledge of human sentiment around ethical or moral concerns. The financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.
Task time horizons are improving exponentially, with doubling times around 4 months per METR. At what timescale would you accept that they "can be strategic"? There's little reason to think they won't be at multi-week or month time horizons very soon. Do you need to be strategic to complete multi-month tasks?
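For what it's worth, the arithmetic behind that claim is easy to sanity-check. The 4-month doubling time is the parent's figure; the 2-hour starting horizon and the hours-per-week/month targets below are my assumptions, not METR data:

```python
import math

# Back-of-envelope on the parent's claim: horizons doubling every 4 months.
# Starting horizon and hour targets are assumptions, not METR's figures.
doubling_months = 4
start_hours = 2  # assumed current reliable task horizon

for label, hours in [("one week", 40), ("one month", 160)]:
    doublings = math.log2(hours / start_hours)
    print(f"{label} (~{hours} working hours): {doublings:.1f} doublings "
          f"≈ {doublings * doubling_months:.0f} months away")
```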
> LLMs can't be strategic because they do not understand the big picture -- that the real work of good software is balancing a hundred different constraints in a way that produces the optimal result for the humans who use it.
There’s good reason to think that they could understand the big picture just fine, even today, except that they’re currently severely constrained by what we choose, or have time, to tell them. They can already easily give a much more comprehensive survey of suitable options for solving a given problem than most humans can.
If they had more direct access to the information we have access to, that we currently grudgingly dole out to them in dribs and drabs, they would be much more capable.
> It was a nice little feature that I knew exactly how to do, but I hadn’t prioritized getting done yet because there were a bunch of other things on my plate. But with a little assist, it was quick to implement.
Exactly how I feel. AI has allowed me to work on projects that I've wanted to work on but didn't have the time/energy for.
I don't feel like the abstraction away from assembly language resulted in fewer software engineering jobs. Nor do I feel like Java's virtual machine resulted in fewer systems engineering jobs. Somehow I don't feel that writing in English rather than pure logic will result in fewer engineering problems either. A lot more, actually. But at least we'll get the requirements out of users into something concrete faster.
What is definitely going to be abundantly clear is just how much better machines can get at creating correct code and how bad each of us truly is at this. That's an ego hit.
The loving effort an artisan puts into a perfect pot still has wabi-sabi from the human error; whereas a factory-produced pot is way more perfect and possesses both a Quality from closeness to Idealism and an eeriness from its unnaturalness.
However, the demand for artisan pottery has niched out compared to Ikea bowls, so that's just how it is.