It boils down to a culture problem: while communities around safer systems programming languages embrace having a panic on signed integer overflow, in the C world suggesting the use of -ftrapv (or similar) will have people reaching for the pitchforks.
The linters and compiler security flags are there, the problem is getting them adopted.
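To make the contrast concrete, here's a minimal Rust sketch of the panic-on-overflow behaviour those communities embrace (assuming overflow checks are on, as they are by default in debug builds):

    use std::hint::black_box;

    fn main() {
        // black_box keeps the compiler from spotting the overflow at compile time.
        let x: i32 = black_box(i32::MAX);

        // With overflow checks on (the default in debug builds) this panics with
        // "attempt to add with overflow" instead of silently wrapping -- roughly
        // what -ftrapv asks a C compiler to do.
        let y = x + 1;
        println!("{y}");
    }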
Of course, a panic is also a failure. It may be a less serious one, or it may just make your rocket explode on take-off and kill anything down-range, while the result of the incorrect computation would otherwise have been irrelevant.
One of the difficulties I've had with the 'safer systems programming languages' advocacy is that since something going wrong is inherent and unavoidable -- since the flaw is ultimately in the user's code -- there is a tendency to pretend that the panic isn't something going wrong. In my experience this has resulted in measurably lower quality code from these communities: code which panics under slightly unexpected conditions, while something written in C would not (yet may fail in a worse way when it does fail).
I don't think I've yet managed to download and run anything written in Rust that doesn't panic within the first 15 minutes of usage -- except the Rust compiler itself and Firefox (though I do now frequently get Firefox crashes that are Rust panics).
It may well be that the increased runtime sensitivity to programmer errors in these languages inherently means we should expect more runtime failures as previously benign mistakes are exposed, and that we ought to accept that software written in them may be less reliable in aggregate, because when it does fail it's less likely to create security problems, and that this is a worthwhile tradeoff. (Python users sure seem to survive a near-constant rate of surprising runtime failures...)
But to the extent, and for as long as, language advocates pretend that panics aren't failures, they can't really advocate for that trade-off or advance better static analysis to reduce the gap, and they will continue to seem fundamentally dishonest to people who try the languages and the software written in them and experience the frequent panics first hand.
The difference between Rust and Java is that Rust developers decided that panics shouldn't be recoverable except as an afterthought for C compatibility.
In principle most panics are amenable to retrying the operation, which would be the equivalent of catching exceptions in Java. So yes, you get an "error has occurred" warning, but your program doesn't terminate immediately. I don't think C has an edge over Java here.
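As a sketch, the catch-and-retry pattern looks roughly like this in Rust, using std::panic::catch_unwind (the function name and retry policy here are made up, and this only works when the binary isn't built with panic = "abort"):

    use std::panic;

    // Hypothetical stand-in for an operation that panics on unexpected input.
    fn flaky_operation(attempt: u32) -> u64 {
        if attempt == 0 {
            panic!("simulated out-of-bounds access");
        }
        42
    }

    fn main() {
        // catch_unwind turns the panic into an Err, roughly what catching a
        // RuntimeException and retrying would look like in Java.
        for attempt in 0..3 {
            match panic::catch_unwind(|| flaky_operation(attempt)) {
                Ok(value) => {
                    println!("succeeded with {value}");
                    return;
                }
                Err(_) => eprintln!("an error has occurred, retrying (attempt {attempt})"),
            }
        }
        eprintln!("giving up");
    }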
How common is it for Java code to handle exceptions in useful ways, rather than just fail in even more inexplicable ways because no one ever conceived of, much less tested, those code paths being executed?
A panic makes an error situation visible; the C way can let an error situation go unnoticed for longer than expected, corrupting data in ways that are harder to recover from than just crashing right there on the spot.
A bit like having warnings as errors, versus deciding to ignore warnings at the peril of whatever might come later, without the feedback of what those warnings were all about.
Yes, but visible at runtime. Depending on the situation you may well prefer* the silent failure. Many such silent failures are completely benign, e.g. when the result of the wrong code (or whatever it corrupted) wasn't subsequently used.
*would prefer if you actually got to pick. But you don't get to pick because once you know of the bug you fix it either way.
Warnings-as-errors isn't a great example, because if you do it in code distributed to third parties it's an absolute disaster: the warnings are not stable and there are constantly shifting false positives. It's perhaps not a good example even without distribution, because it can lead to hasty "make it compile" 'fixes' that can introduce serious (and inherently warning-undetectable) bugs. It's arguably better to have warnings warn until you have the time to look at them and handle them seriously, so long as they don't get missed.
The parallel doesn't carry through to undefined behavior because the undefined behavior isn't logging a warning that you could check out later (e.g. before cutting a release).
However, culture results in artefacts. You mostly won't find American football stadiums in England's cities, because the game isn't part of their culture. If the English suddenly took to it, such stadiums would likely still take several decades to become widespread.
C libraries like OpenSSL reflect what's culturally appropriate in that language, so even if you came to C from a language with a different culture, too bad: they have the culturally appropriate API design and behaviour.
I think that OpenSSL has historically reflected a rather antiquated C culture that most software moved on from long ago, FWIW.
A clear example of this is OpenSSL intentionally mixing uninitialized memory into its randomness pool (because on some obscure and long-forgotten platforms that was the only way it had to get any 'randomness'), resulting in any program written using it absolutely spewing valgrind errors all over the place. (Unless your OpenSSL had been compiled with -DPURIFY to skip that behavior, or had the Debian "fix" of bypassing the RNG almost completely :P)
I think the OpenSSL situation you're talking about arises because of a mistake by a maintainer.
MD_Update(&m,buf,j);
Kurt Roeckx found this line twice in OpenSSL. Valgrind moaned about this code and Kurt proposed removing it. Nobody objected, so in Debian Kurt removed the two lines.
One of these occurrences is, as you described, mixing uninitialized (in practice likely zero) bytes into a pool of other data; removing it does indeed silence the Valgrind error and fixes the problem. The other, however, is actually how real random numbers get fed into OpenSSL's "entropy pool"; with it removed there is no entropy, and the result was the "Debian keys" -- predictable keys "randomly" generated by affected OpenSSL builds.
I haven't seen OpenSSL people claim that the first, erroneous, call was somehow supposed to make OpenSSL produce random bits on some hypothetical platform where the contents of uninitialised memory doesn't start as zero; it looks more like ordinary C programmer laziness to me.
The odd thing with that incident is that the PURIFY define long predated it -- the correct fix in Debian should have been "just compile with -DPURIFY". I believe Red Hat was already doing so at the time.
> I haven't seen OpenSSL people claim that the first, erroneous, call was somehow supposed to make OpenSSL produce random bits on some hypothetical platform where the contents of uninitialised memory doesn't start as zero
I had an OpenSSL dev explain to me in person, when I complained about the default behavior, that there had been platforms which depended on that behavior, that they weren't sure which ones did, and so it didn't seem safe to eliminate it. (I'd complained because I couldn't have users with non-DPURIFY OpenSSL builds run valgrind as part of troubleshooting.) IIRC the use of uninitialized memory was intentional and remarked on in comments in the code.
- If the "uninitialized" data is actually somehow some kind of interference.
- In LLVM, using an "undef" value will not always do the same thing each time; however, the "freeze" instruction can be used to avoid that problem. (I don't know if this feature of LLVM can be accessed from C code, or how similar things work in GCC.)
- If the code seems unusual, then you should write comments to explain why it is written in the way that it is. (You can then also know what considerations to make if you want to remove it.)
- Whether or not there is uninitialized data, you will still need proper entropy, from other genuinely random data (see the sketch below).
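For that last point, a minimal sketch of pulling a seed from the operating system's RNG rather than from uninitialized memory (Unix-specific and illustrative only; /dev/urandom and the 32-byte seed size are assumptions here):

    use std::fs::File;
    use std::io::Read;

    // Fill a seed buffer from the OS RNG instead of relying on whatever happens
    // to be in uninitialized memory. Unix-specific sketch; real code would use
    // getrandom()/getentropy() or a library wrapper rather than the device file.
    fn os_random_seed() -> std::io::Result<[u8; 32]> {
        let mut seed = [0u8; 32];
        File::open("/dev/urandom")?.read_exact(&mut seed)?;
        Ok(seed)
    }

    fn main() -> std::io::Result<()> {
        let seed = os_random_seed()?;
        println!("first seed byte: {}", seed[0]);
        Ok(())
    }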