As a devops/infrastructure guy, this drives me nuts. You abstract everything away, but then are shocked when you don't understand what's happening when the underlying layers fail or break down in unexpected ways.
I understand your frustration, but the field has become too big for a single (average) person. When problems arise, you team up with someone who knows more about the issue, or you learn on the spot. Nothing wrong with not knowing everything.
That's not my problem. My problem is the illusion presented that you don't need to understand the underlying infrastructure. Far too often I see infrastructure as an afterthought, secondary to the code.
What kind of organisations have you worked in? I've been lucky (?) to work in an organisation with a dedicated network/infrastructure team. My work life got a lot better when I gained more understanding of how the network was put together (load balancers, DNS, data center locations, etc.).
I still don't have a "low level" understanding of the network I use.
I've always managed or been on infrastructure teams at other orgs; my latest gig is at a startup, and I'm the only infrastructure guy. Perhaps that is the problem.
More likely when problems arise you "team up" with someone who knows [almost] everything.
And I agree with other posters that this isn't something new. That's the separation between senior and junior specialists (and no - if one doesn't know what network latency is, that one is not a senior frontend engineer).
Fundamentals are small, compact, and trivial. Everyone can get a good grasp of all the basics before going on to any "real world" coding with its diverse, needless complexity.
It's rapidly becoming a two-tier world with two types of programmers: those who are well trained in coding within known environments, and those who, like the folks in The Matrix films who took the red pill, know what's really going on.
I'm an embedded systems engineer. It's always been that way for me: I own and compile the bootloader, kernel, filesystem, runtime libraries, and the application code. I have the schematics and gerbers for my boards. I have the reference manuals for every chip.
What's weird is watching people play with stuff like Raspberry Pi and they expect things to just apt-get and launch like it's always been done for them. And most of the time it does just work that way. Except when it doesn't.
Many development environments would benefit from a graphical representation of how much of the platform's capacity is actually used up during a test run, how cache-friendly the code is, and how much your latest changes affected this. And these coloured columns need to show up in every svn/git commit message.
Profiling shouldn't be something the embedded guy does after you've messed it up.
Jerry committed: +25% more time spent in function xy, due to cache unfriendliness.
If you see it, and your colleagues see it, it becomes an issue. If it becomes an issue, the perpetrator will look into it.
For some of them, absolutely. I modify the assembly output and reassemble to verify that the atomics are necessary and that they fix the cases I think they do.
As far as I'm concerned, if you're not removing the atomic instructions to make sure the breakage you expect actually happens, you're not really testing your code.
You don't really have to know the internals, but I personally think you ought to know the abstractions themselves at the very least. To be more specific, you need not know, say, the internals of your compiler, but you ought to know how compilers in general work. Same with protocols, filesystems, OS internals (VM, scheduler), etc.
A good developer must at least know the standard latencies and the memory hierarchy. Otherwise they're a really bad developer who is going to keep annoying innocent users.