Genuine question—how would this bug be produced in the first place?
My (limited) experience makes me think that cleartext passwords are somehow hard coded to be logged, perhaps through error logging or a feature that’s intended for testing during development.
I personally would not code a backend that allows passwords (or any sensitive strings) to be logged in any shape or form in production, so it seems a little weird to me that this mistake is considered a “bug” instead of a very careless mistake. Am I missing something?
Let's say you log requests and the POST body parameters that are sent along with them. Oops, you forgot to explicitly blank out any fields known to contain passwords. Now they're saved in cleartext in the logs every time a user logs in.
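A minimal sketch of how innocently this happens (hypothetical handler and field names, Python standing in for whatever the backend is written in):

```python
import json

def format_request_log(path: str, body: dict) -> str:
    # Innocent-looking helper: serializes the whole POST body into the
    # log line, including any "password" field nobody thought to strip.
    return f"POST {path} body={json.dumps(body)}"

line = format_request_log("/login", {"username": "alice", "password": "hunter2"})
# "line" now contains the cleartext password, ready to be written to disk.
```

Nothing in that code looks wrong in review; the bug is the absence of a filtering step.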
We made this mistake. The trick is determining which fields are sensitive, which are sensitive enough that they should be censored but still included in the log, and which are just the rest of the crud.
It turns out that this is non-trivial: when censoring, how do you indicate that something was changed while keeping the output to a minimum? Blank/"null" was rejected because it would mask other problems, and "* THIS FIELD HAS BEEN REDACTED DUE TO SENSITIVE INFORMATION *" was rejected for being "too long". Currently we use "XXXXX", which has caused some intern head-scratching but is otherwise fine.
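The scheme above might look something like this (field list and marker assumed for illustration):

```python
SENSITIVE_FIELDS = {"password", "ssn", "credit_card"}  # assumed list
REDACTED = "XXXXX"  # short, obviously artificial, and distinct from a missing field

def redact(body: dict) -> dict:
    # Replace sensitive values with a fixed marker rather than dropping
    # or blanking them, so the log still shows the field was present.
    return {k: (REDACTED if k in SENSITIVE_FIELDS else v) for k, v in body.items()}

redact({"username": "alice", "password": "hunter2"})
# → {'username': 'alice', 'password': 'XXXXX'}
```

The fixed marker is the point: "null" or an empty string would be indistinguishable from a client genuinely sending nothing.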
Easy: have a framework that validates and sanitizes all your parameters, doesn't allow any non-declared parameter, and makes something like "can_be_logged" a mandatory attribute; then log only those fields and audit them.
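A sketch of that declare-or-reject approach (the "can_be_logged" attribute and parameter names are from the comment above; the rest is assumed):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParamSpec:
    name: str
    can_be_logged: bool  # mandatory: every declared parameter must decide

LOGIN_PARAMS = {
    "username": ParamSpec("username", can_be_logged=True),
    "password": ParamSpec("password", can_be_logged=False),
}

def validate_and_filter(params: dict, specs: dict) -> dict:
    # Reject anything that was not explicitly declared...
    unknown = set(params) - set(specs)
    if unknown:
        raise ValueError(f"undeclared parameters: {sorted(unknown)}")
    # ...and pass only the loggable fields through to the logger.
    return {k: v for k, v in params.items() if specs[k].can_be_logged}

validate_and_filter({"username": "alice", "password": "x"}, LOGIN_PARAMS)
# → {'username': 'alice'}
```

The default is the important part: a new field is unloggable until someone consciously declares otherwise.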
Wouldn't that make it easier for someone who has access to hashed passwords in the case of a database leak? They would just have to submit the username and the hashed password (which they now have).
In this case the client side will have our algorithm (e.g. in JavaScript) plus the private key we use to hash the password. If that's the case, I can't see any difference between giving an attacker the password and giving them the hashed password along with the algorithm and key.
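A toy demonstration of why the client-side hash simply becomes the password (hypothetical scheme; all names assumed):

```python
import hashlib

stored = {}  # username -> credential the server stores and compares against

def client_hash(password: str) -> str:
    # Public algorithm plus a "key" embedded in the shipped JavaScript:
    # anyone who can read the client code can compute this.
    return hashlib.sha256(b"app-key:" + password.encode()).hexdigest()

def register(user: str, password: str) -> None:
    stored[user] = client_hash(password)

def login(user: str, submitted_credential: str) -> bool:
    # The server only ever sees the hash, so the hash IS the password.
    return stored.get(user) == submitted_credential

register("alice", "hunter2")
login("alice", client_hash("hunter2"))  # normal login succeeds
login("alice", stored["alice"])         # attacker replays the leaked hash: also succeeds
```

This is the "pass the hash" problem: if the server compares the submitted hash directly to the stored one, a database leak hands out working credentials.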
So sure, you don't want to log everything in Prod, but maybe you do in Dev. In that case, the bug would be pushing the Dev logging configuration to Prod. Oops.
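One way to defend against that configuration mix-up is to fail fast at startup instead of silently logging secrets (a sketch; the environment variable names are assumptions):

```python
import os

def body_logging_enabled() -> bool:
    # Dev convenience flag; the bug in question is this being on in prod.
    return os.environ.get("LOG_REQUEST_BODIES") == "1"

def assert_safe_config() -> None:
    # Refuse to start rather than run prod with dev-grade logging.
    env = os.environ.get("APP_ENV", "prod")
    if env != "dev" and body_logging_enabled():
        raise RuntimeError("LOG_REQUEST_BODIES must not be enabled outside dev")
```

The check turns a quiet data leak into a loud deploy failure, which is the failure mode you want.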
If you have the cleartext password at any point in your codebase, then there is no foolproof way to prevent logging it unintentionally as the result of a bug. You just have to be extra careful (code review, a minimal amount of code manipulating it, a prod-like testing environment with a log scanner, ...).
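The log scanner mentioned above could be as simple as a few regexes run over test-environment logs (hypothetical patterns; a real scanner would cover the app's actual field names):

```python
import re

# Patterns that suggest a credential made it into a log line.
SECRET_PATTERNS = [
    re.compile(r'"password"\s*:\s*"[^"]+"'),  # JSON-style body
    re.compile(r'password=[^&\s]+'),          # form/query-style body
]

def scan_log_line(line: str) -> bool:
    """Return True if the line appears to contain a cleartext credential."""
    return any(p.search(line) for p in SECRET_PATTERNS)

scan_log_line('POST /login body={"password": "hunter2"}')  # flagged
scan_log_line('GET /health 200')                           # clean
```

It won't catch everything, but run in CI against a prod-like environment it catches the "oops, we log bodies now" class of regression.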
Not exactly log files, but I once noticed a C core dump contained raw passwords in strings that had been freed but not explicitly overwritten. Similar to how Facebook "deletes" files by merely marking them as deleted, free() works the same way in C: the memory isn't actually overwritten until something else writes over it.
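The usual mitigation is to overwrite the buffer before releasing it (in C, memset or glibc's explicit_bzero before free). A best-effort Python analogue, using a mutable bytearray since str is immutable and can't be scrubbed:

```python
def use_password(secret: bytearray) -> None:
    try:
        # ... authenticate using bytes(secret) ...
        pass
    finally:
        # Overwrite in place before the buffer is released, so a later
        # core dump of this region sees zeros, not the password.
        # Best effort only: the runtime may already hold copies elsewhere.
        for i in range(len(secret)):
            secret[i] = 0

pw = bytearray(b"hunter2")
use_password(pw)
pw  # → bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

In managed runtimes this is only a partial defense, which is why the C-level fix (zero before free) matters where it's available.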
Aren't core dumps static copies of the memory state at the time of termination, usually unplanned? So not really the same thing as having ongoing access to a program's memory. I can't really see a debugging process that would involve viewing memory in a dynamic way, whereas it is somewhat of a concern if core dumps (an important debugging tool) reveal plaintext passwords.