
It seems pretty simple: piping Bash commands into other commands and other text-stream juggling is a typical use of these tools, so changing which stream a message is written to can change the behavior of anything consuming their output.

I haven’t done anything with fgrep and egrep before, but piping grep into another grep for more complex classes of text search is something I use a lot.
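A minimal sketch of that pattern (the log lines and patterns here are made up for illustration): two chained greps narrow the matches in stages, and a single `grep -E` can often do the same in one pass.

```shell
# Two-pass filtering: first keep error lines, then keep the disk ones.
printf 'error: disk full\nerror: net down\ninfo: ok\n' \
  | grep 'error' | grep 'disk'
# → error: disk full

# Equivalent single pass with an extended regular expression:
printf 'error: disk full\nerror: net down\ninfo: ok\n' \
  | grep -E 'error.*disk'
# → error: disk full
```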



It's more than likely the warning will be printed to stderr, not stdout, so there will be no impact on the actual work done.
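That separation is easy to demonstrate. Assuming a grep new enough (GNU grep >= 3.8) to emit the warning at all, the warning goes to stderr while the matches go to stdout, so discarding stderr leaves the piped data untouched; on older greps the command below behaves identically, just with nothing to discard.

```shell
# stdout carries only the matches; any obsolescence warning on
# stderr is thrown away and never reaches downstream consumers.
printf 'foo\nbar\n' | egrep 'foo' 2>/dev/null
# → foo
```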


Automatically monitoring the stderr from cron jobs for unusual output is a prudent measure, and it's plausible that this change will increase the burden of false positives (it certainly will not reduce it).


But if you’re monitoring the output, it usually means you are in a position to fix problems, which means you can likely update the script in question to use the new warning-free invocation.
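The warning-free replacements are direct drop-ins: `egrep` becomes `grep -E` and `fgrep` becomes `grep -F` (both flags are POSIX, so this is portable beyond GNU grep).

```shell
# egrep PATTERN  ->  grep -E PATTERN   (extended regexp)
printf 'a.b\naxb\n' | grep -E 'a.b'   # '.' is a metacharacter: matches both lines

# fgrep STRING   ->  grep -F STRING    (fixed string)
printf 'a.b\naxb\n' | grep -F 'a.b'   # literal match: only "a.b"
```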


If I had been woken up in the middle of the night or had a vacation interrupted on account of this, I would not be entertaining warm and grateful thoughts toward whoever thought it was a good idea.


If you're being woken in the middle of the night over this then your testing infrastructure is crap.

This is something that should be caught before it gets to that point SPECIFICALLY so you aren't getting woken up in the middle of the night.


All the replies so far are missing the point: it is prudent to monitor for unusual events, including previously-unseen messages, over and above the explicit handling of specific errors. It is also prudent to not wait until morning to investigate.

Your infrastructure probably is crap, as very few people get to build it themselves from scratch. That does not mean one should cheerfully accept additional unnecessary or pedantic complications.

It would also be prudent to investigate each and every change to any of the software you use, in order to anticipate problems, but unnecessary and pedantic changes increase the burden there, as well.


Wouldn't you test before upgrading packages in production? And usually you'd want to schedule any upgrades so that the next day or two has coverage from someone who can deal with any issues that arise.


I have yet to work at a place that didn’t have systems running mission-critical shell scripts with little to no SDLC on boxes that got periodic “yum update -y”s. There seems to be a difference in oversight of software we write & “the operating system”.

Should we do better? Absolutely! Will this burn people if vendors don’t take care? Also absolutely!


What if a cron.monthly or cron.weekly script calls egrep? Congrats, now you get a lot of noise from cron stderr emails in the distant future.
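Sketch of that failure mode (the schedule, paths, and addresses below are hypothetical): cron mails any output a job produces, stdout or stderr alike, so a previously silent job starts generating a mail per run just for the warning. Switching the invocation avoids the noise without discarding real errors.

```shell
# Before: on grep >= 3.8 this weekly job mails the warning every run.
#   0 3 * * 0  egrep 'CRIT' /var/log/app.log | mail -s alerts ops@example.com
# After: same behavior, no warning.
#   0 3 * * 0  grep -E 'CRIT' /var/log/app.log | mail -s alerts ops@example.com
```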


I don't know if I have sympathy for this argument.

Your script ostensibly handles (at least logs) errors and warnings right? Do you exhaustively handle every single error and warning in a unique and different way or do you have a catchall "If non-0 return code then fail"? How does introducing new output to stderr affect that?


Hence why I wrote "actual work done".

Your scripts will continue to produce the expected output. The side effects, on the other hand, will indeed change.


The simple fact that you have to wonder about that question is the failure.

Everything about this, even this comment I'm writing right now, is a waste of time.


Can confirm:

    $ egrep '.' < <(grep --version) > /dev/null
    egrep: warning: egrep is obsolescent; using grep -E


It's not unusual in shell scripts to combine stderr with stdout by using "2>&1" or similar.
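That is where the hazard lives: once stderr is merged with `2>&1`, the warning lands in the data stream itself. On grep >= 3.8 a script capturing the combined output sees an extra, unexpected line; on older greps it sees only the matches (a sketch of the pitfall, not a recommendation to parse merged streams).

```shell
# Merging the streams mixes any warning into the captured data.
out=$(printf 'foo\n' | egrep 'foo' 2>&1)
echo "$out"   # "foo" alone, or the obsolescence warning followed by "foo"
```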


It is, however, very unusual to do so and then try to parse the output. Aside from compilers, what other CLI tools make any guarantees wrt what they print to stderr?



