
Genuine question—how would this bug be produced in the first place?

My (limited) experience makes me think that cleartext passwords are somehow hard coded to be logged, perhaps through error logging or a feature that’s intended for testing during development.

I personally would not code a backend that allows passwords (or any sensitive strings) to be logged in any shape or form in production, so it seems a little weird to me that this mistake is considered a “bug” instead of a very careless mistake. Am I missing something?

EDIT: Thank you very much in advance!



Let's say you log requests and the POST body parameters that are sent along with them. Oops, forgot to explicitly blank out any fields known to contain passwords. Now they're saved in cleartext in the logs every time the user logs in.
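A minimal Python sketch of that failure mode and its fix, assuming a hypothetical `log_request` helper and a hand-maintained denylist of field names (the weak point being that the denylist must stay in sync with every form in the app):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("requests")

# Hypothetical denylist; a real app must keep this in sync with every form.
SENSITIVE_FIELDS = {"password", "passwd", "new_password", "current_password"}

def redact(params: dict) -> dict:
    """Blank out any field known to contain a password."""
    return {k: ("XXXXX" if k in SENSITIVE_FIELDS else v) for k, v in params.items()}

def log_request(path: str, params: dict) -> None:
    # The bug is logging params directly; the fix is redacting first.
    log.info("POST %s %s", path, json.dumps(redact(params)))

log_request("/login", {"username": "alice", "password": "hunter2"})
```

Forget the `redact()` call (or miss one field name in the denylist) and the cleartext password lands in the log.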


Even logging a username field is likely to catch a bunch of false positives of users entering their passwords in the username input.


We made this mistake - the trick is determining which fields are sensitive, which are sensitive enough that they should be censored but still included in the log, and which are just the rest of the cruft.

It turns out that this is non-trivial - when censoring, how do you indicate that something was changed while keeping the output to a minimum? Blank/"null" was rejected because it would mask other problems, and "* THIS FIELD HAS BEEN REDACTED DUE TO SENSITIVE INFORMATION *" was rejected for being "too long". Currently we use "XXXXX", which has caused some intern head-scratching but is otherwise fine.


Easy: use a framework that validates and sanitizes all your parameters, reject any undeclared parameter, and make something like "can_be_logged" a mandatory attribute; then log and audit only those.
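That declarative approach could be sketched in Python like this (the `Param` class and the schema itself are hypothetical; `can_be_logged` follows the comment above and is deliberately given no default so it can't be forgotten):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Param:
    name: str
    can_be_logged: bool  # mandatory: no default, so every field forces a decision

# Hypothetical per-endpoint schema.
LOGIN_SCHEMA = {
    "username": Param("username", can_be_logged=True),
    "password": Param("password", can_be_logged=False),
}

def validate_and_filter(schema: dict, params: dict) -> dict:
    """Reject undeclared parameters; return only the loggable ones."""
    unknown = set(params) - set(schema)
    if unknown:
        raise ValueError(f"undeclared parameters: {unknown}")
    return {k: v for k, v in params.items() if schema[k].can_be_logged}

loggable = validate_and_filter(LOGIN_SCHEMA, {"username": "alice", "password": "hunter2"})
# loggable == {"username": "alice"}
```

The design choice is that the safe path is the only path: an undeclared field is an error rather than something that silently flows into the logs.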


I'd replace redacted fields with [redacted], or maybe U+2588


You prevent a lot of these problems by hashing passwords as soon as possible (i.e., on the client).


Wouldn't that make it easier for someone who has access to the hashed passwords in the case of a database leak? They would just have to submit the username and the hashed password (which they now have).


You're right, but the attacker won't get the user's original password that they probably reuse elsewhere.

If it's just your authentication system hashes that are compromised, the damage can be contained.


In that case the client side will have our algorithm (e.g. in JavaScript) plus the key we use to hash the password. If so, I can't see any difference between giving a hacker the password and giving them the hashed password along with the algorithm and key.


While there is merit to clientside hashing, you should always hash serverside as well, lest a leak prove catastrophic.
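A Python sketch of that belt-and-braces setup, using stdlib `hashlib.pbkdf2_hmac` on the server; the function names are hypothetical, and the plain SHA-256 client hash is purely illustrative (a real client would use a salted, slow hash):

```python
import hashlib
import hmac
import os

def client_hash(password: str) -> str:
    # Client-side stand-in: the raw password never leaves the client.
    # Illustrative only; a real client hash would be salted and slow.
    return hashlib.sha256(password.encode()).hexdigest()

def server_store(client_sent: str) -> tuple[bytes, bytes]:
    """Server-side: treat the client-sent hash as the secret and hash it
    again, salted and stretched, before storing."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", client_sent.encode(), salt, 100_000)
    return salt, digest

def server_verify(client_sent: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", client_sent.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = server_store(client_hash("hunter2"))
assert server_verify(client_hash("hunter2"), salt, digest)
assert not server_verify(client_hash("wrong"), salt, digest)
```

With the second server-side pass, a leaked database yields neither the original password nor a value that can be replayed as the login credential.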


In the context of production, why would you need to log anything other than X-Forwarded-For/X-Real-IP, timestamp, and the endpoint that was hit?


Remember that the context is a bug.

So sure you don't want to log everything in Prod, but maybe you do in Dev. In that case, a bug would be to push the dev logging configuration to Prod. Oops.

If the cleartext password exists at any point in your codebase, then there is no foolproof way to prevent it from being logged unintentionally as the result of a bug. You just have to be extra careful (code review, a minimal amount of code manipulating it, a prod-like testing environment with a log scanner, ...).
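The log scanner mentioned above could start as a simple pattern match over log lines; the regex and field names here are illustrative and would need tuning per codebase:

```python
import re

# Flag any line that appears to contain a cleartext secret value,
# while letting properly redacted ("XXXXX") values pass.
SECRET_PATTERN = re.compile(
    r'"?(?:password|passwd|secret)"?\s*[:=]\s*(?!"?XXXXX)\S+',
    re.IGNORECASE,
)

def scan_log_lines(lines):
    """Return (line_number, line) pairs that look like leaked secrets."""
    return [(i, line) for i, line in enumerate(lines, 1) if SECRET_PATTERN.search(line)]

logs = [
    'POST /login {"username": "alice", "password": "XXXXX"}',
    'DEBUG params: password=hunter2',
]
hits = scan_log_lines(logs)
# Only the second line is flagged; the redacted first line passes.
```

Run against the log output of a prod-like test environment, this catches the "dev logging config pushed to prod" class of bug before it ships.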


Because when fatal exceptions happen you want to know what the request was. It helps debug what went wrong.


Not exactly log files, but I once noticed a C coredump contained raw passwords in strings that had been free'd but not explicitly overwritten. Similar to how Facebook "deletes" files by merely marking them as deleted, free() works the same way in C: the memory isn't actually overwritten until something else writes over it.


But if you have access to the program's memory, you have access to all the POST requests anyway.


Aren't coredumps static copies of the memory state at time of termination - usually unplanned? So not really the same thing as having ongoing access to a program's memory; I can't really see a debugging process that would involve viewing memory in a dynamic way, whereas it's somewhat of a concern if coredumps (an important debugging tool) reveal plaintext passwords.


You're getting a lot of what I would consider bad responses.

There are ways (each with downsides) to mitigate the risk of logging requests.

An HMAC with a time component will render the logged data useless before long - essentially an OTP. Downside: the client's clock needs to be accurate.

Negotiate a shared key, à la NTLM. Downside: more round trips; you're essentially establishing encrypted transport inside encrypted transport (HTTPS).


Careless mistakes are probably one of the most common types of bug you'll find in the wild.



