It seems pretty simple: piping commands into other commands and other text-stream juggling is a typical use of these tools, so changing which stream a message goes to can change the behavior of consumers of their output.
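A minimal sketch of why the choice of stream matters to consumers. The pipe only carries stdout, but a consumer that merges streams with `2>&1` sees any new warning text mixed into its input (`msg_and_warn` is a hypothetical stand-in for a tool that starts emitting a warning):

```shell
# A pipeline only sees stdout; anything written to stderr bypasses
# the pipe and goes to the terminal (or wherever fd 2 points).
printf 'foo\nbar\n' | grep foo | wc -l     # counts only matching stdout lines

# Hypothetical tool that newly prints a warning on stderr:
msg_and_warn() { echo "data"; echo "warning: deprecated" >&2; }

# A consumer that merges the streams now swallows the warning too,
# silently changing what it counts:
msg_and_warn 2>&1 | wc -l
```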
I haven’t done anything with fgrep and egrep before, but piping grep into another grep for more complex classes of text search is something I use a lot.
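A small sketch of that chaining pattern. Only stdout travels down the pipe, so a deprecation warning on stderr from the first command would never reach the second grep's input:

```shell
# Narrow a search in stages: first filter, then filter the
# filtered output again. Only lines matching both survive.
printf 'error: disk full\nerror: net down\ninfo: ok\n' \
  | grep '^error' \
  | grep disk
# -> error: disk full
```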
Automatically monitoring the stderr from cron jobs for unusual outputs is a prudent measure, and it's plausible that this change will increase the burden of false positives (it certainly will not reduce it).
But if you’re monitoring the output it usually means you are in a position to fix problems which means you can likely update the script in question to use the new warning-less invocation.
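Assuming the warning at issue is GNU grep's egrep/fgrep deprecation notice, the warning-less invocation is the equivalent `grep -E` / `grep -F` spelling; a minimal sketch (`demo.txt` is a throwaway file created just for illustration):

```shell
# egrep and fgrep are thin wrappers; these spellings behave the
# same but emit no deprecation warning on recent GNU grep:
printf 'foo\na.b\naXb\n' > demo.txt
grep -E 'foo|bar' demo.txt   # instead of: egrep 'foo|bar' demo.txt
grep -F 'a.b' demo.txt       # instead of: fgrep 'a.b' demo.txt (literal, so aXb is NOT matched)
rm -f demo.txt
```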
If I had been woken up in the middle of the night or had a vacation interrupted on account of this, I would not be entertaining warm and grateful thoughts toward whoever thought it was a good idea.
All the replies so far are missing the point: it is prudent to monitor for unusual events, including previously-unseen messages, over and above the explicit handling of specific errors. It is also prudent to not wait until morning to investigate.
Your infrastructure probably is crap, as very few people get to build it themselves from scratch. That does not mean one should cheerfully accept additional unnecessary or pedantic complications.
It would also be prudent to investigate each and every change to any of the software you use in order to anticipate problems, but unnecessary and pedantic changes increase the burden there as well.
Wouldn't you test before upgrading packages in production? And usually you'd want to schedule any upgrades so that the next day or two has coverage from someone who can deal with any issues that arise.
I have yet to work at a place that didn’t have systems running mission-critical shell scripts with little to no SDLC on boxes that got periodic “yum update -y”s. There seems to be a difference in oversight of software we write & “the operating system”.
Should we do better? Absolutely! Will this burn people if vendors don’t take care? Also absolutely!
I don't know if I have sympathy for this argument.
Your script ostensibly handles (or at least logs) errors and warnings, right? Do you exhaustively handle every single error and warning in a unique way, or do you have a catchall "if non-zero return code then fail"? How does introducing new output to stderr affect that?
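A minimal sketch of the two styles being contrasted here. The exit-status catchall is untouched by new warning text, because grep still exits 0 on a match; only a monitor that alerts on any stderr output at all would notice:

```shell
# Catchall style: fail on non-zero exit status, ignore stderr text.
# A new deprecation warning does not trip this check.
if printf 'x\n' | grep -q 'x'; then
    echo "ok"
else
    echo "step failed" >&2
    exit 1
fi

# Stderr-monitoring style: capture fd 2 separately and flag any
# output on it, even when the exit status is success.
err=$( { printf 'x\n' | grep 'x' >/dev/null; } 2>&1 )
[ -n "$err" ] && echo "unexpected stderr: $err"
```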
It is, however, very unusual to do so and then try to parse the output. Aside from compilers, what other CLI tools make any guarantees with respect to what they print to stderr?